1502.00195
2034880893
Air pollution monitoring is a very popular research topic and many monitoring systems have been developed. In this paper, we formulate the Bus Sensor Deployment Problem (BSDP) to select the bus routes on which sensors are deployed, and we use Chemical Reaction Optimization (CRO) to solve the BSDP. CRO is a recently proposed metaheuristic designed to solve a wide range of optimization problems. Using real-world data, namely Hong Kong Island bus route data, we perform a series of simulations, and the results show that CRO is capable of solving this optimization problem efficiently.
Many optimization problems have been solved with CRO since it was first proposed in @cite_12 . Xu used CRO to solve the task scheduling problem in grid computing @cite_14 , a multi-objective NP-hard optimization problem. Lam formulated a population transition problem in P2P live streaming and solved it with CRO in @cite_15 . Lam and Li also solved the cognitive radio spectrum allocation problem in @cite_10 ; several variants of CRO were proposed for this problem, and a self-adaptive scheme was used to control the convergence speed of CRO @cite_10 . Yu proposed the CROANN algorithm, based on the real-coded version of CRO @cite_1 , to train artificial neural networks (ANNs). CROANN uses a novel stopping criterion to prevent the ANNs from being over-trained, and the simulation results demonstrated that CROANN outperforms most previously proposed EA-based ANN training methods as well as some sophisticated heuristic training methods. This shows that CRO has great potential to tackle different optimization problems such as the BSDP discussed in this paper.
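The CRO applications surveyed above all instantiate the same underlying metaheuristic. As a rough, hedged illustration, the sketch below keeps only the core CRO ingredients: a molecule's structure (candidate solution), potential energy (objective value), and a kinetic-energy buffer that lets it temporarily accept worse solutions. Only the on-wall ineffective collision operator is modelled; the full algorithm also has decomposition, synthesis, and inter-molecular collisions. All names and parameter values here are illustrative, not taken from the cited papers.

```python
import random

def cro_minimize(f, dim, iters=2000, pop=10, seed=0):
    """Minimal CRO-style minimizer (illustrative sketch only)."""
    rng = random.Random(seed)
    # each molecule: structure x, potential energy pe, kinetic energy ke
    mols = [{"x": [rng.uniform(-5, 5) for _ in range(dim)], "ke": 100.0}
            for _ in range(pop)]
    for m in mols:
        m["pe"] = f(m["x"])
    best_x, best_pe = min(((m["x"][:], m["pe"]) for m in mols),
                          key=lambda t: t[1])
    for _ in range(iters):
        m = rng.choice(mols)
        # on-wall ineffective collision: perturb one random coordinate
        x2 = m["x"][:]
        x2[rng.randrange(dim)] += rng.gauss(0, 0.3)
        pe2 = f(x2)
        # accept if the molecule's total energy can absorb the change;
        # leftover KE is damped (a KE loss rate) so the search cools down
        if m["pe"] + m["ke"] >= pe2:
            m["ke"] = (m["pe"] + m["ke"] - pe2) * 0.9
            m["x"], m["pe"] = x2, pe2
            if pe2 < best_pe:
                best_x, best_pe = x2[:], pe2
    return best_x, best_pe
```

The kinetic-energy buffer is what distinguishes CRO-style acceptance from plain greedy search: uphill moves are possible while energy remains, and the loss rate gradually makes the molecules behave greedily.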
{ "cite_N": [ "@cite_14", "@cite_1", "@cite_15", "@cite_10", "@cite_12" ], "mid": [ "2153503747", "2163178248", "1985749783", "2127123061", "2136554306" ], "abstract": [ "Grid computing solves high performance and high-throughput computing problems through sharing resources ranging from personal computers to supercomputers distributed around the world. One of the major problems is task scheduling, i.e., allocating tasks to resources. In addition to Makespan and Flowtime, we also take reliability of resources into account, and task scheduling is formulated as an optimization problem with three objectives. This is an NP-hard problem, and thus, metaheuristic approaches are employed to find the optimal solutions. In this paper, several versions of the Chemical Reaction Optimization (CRO) algorithm are proposed for the grid scheduling problem. CRO is a population-based metaheuristic inspired by the interactions between molecules in a chemical reaction. We compare these CRO methods with four other acknowledged metaheuristics on a wide range of instances. Simulation results show that the CRO methods generally perform better than existing methods and performance improvement is especially significant in large-scale applications.", "Evolutionary algorithms (EAs) are very popular tools to design and evolve artificial neural networks (ANNs), especially to train them. These methods have advantages over the conventional backpropagation (BP) method because of their low computational requirement when searching in a large solution space. In this paper, we employ Chemical Reaction Optimization (CRO), a newly developed global optimization method, to replace BP in training neural networks. CRO is a population-based metaheuristics mimicking the transition of molecules and their interactions in a chemical reaction. 
Simulation results show that CRO outperforms many EA strategies commonly used to train neural networks.", "Peer-to-peer (P2P) live streaming applications have become very popular in recent years, and a Markov open queueing network model was developed to study the population dynamics in P2P live streaming. Based on the model, we deduce an optimization problem, called the population transition problem, with the objective of maximizing the probability of universal streaming by manipulating the population transition probability matrix. We employ a chemical reaction-inspired metaheuristic, Chemical Reaction Optimization (CRO), to solve the problem. Simulation results show that CRO outperforms many commonly used strategies for controlling population transition in many practical P2P live streaming systems. This work also demonstrates the usability of CRO for solving optimization problems.", "Cognitive radio can help increase the capacity of wireless networks by allowing unlicensed users to use the licensed bands, provided that their occupancy does not affect the prioritized licensed users. One of the fundamental problems in cognitive radio is how to allocate the available channels to the unlicensed users in order to maximize the utility. In this work, we develop an allocation algorithm based on the newly proposed chemical reaction-inspired metaheuristic called Chemical Reaction Optimization (CRO). We study three utility functions for utilization and fairness, with the consideration of the hardware constraint. No matter which utility function is used, simulation results show that the CRO-based algorithm always outperforms the others dramatically.", "We encounter optimization problems in our daily lives and in various research domains. Some of them are so hard that we can, at best, approximate the best solutions with (meta-) heuristic methods.
However, the huge number of optimization problems and the small number of generally acknowledged methods mean that more metaheuristics are needed to fill the gap. We propose a new metaheuristic, called chemical reaction optimization (CRO), to solve optimization problems. It mimics the interactions of molecules in a chemical reaction to reach a low energy stable state. We tested the performance of CRO with three nondeterministic polynomial-time hard combinatorial optimization problems. Two of them were traditional benchmark problems and the other was a real-world problem. Simulation results showed that CRO is very competitive with the few existing successful metaheuristics, having outperformed them in some cases, and CRO achieved the best performance in the real-world problem. Moreover, with the No-Free-Lunch theorem, CRO must have equal performance as the others on average, but it can outperform all other metaheuristics when matched to the right problem type. Therefore, it provides a new approach for solving optimization problems. CRO may potentially solve those problems which may not be solvable with the few generally acknowledged approaches." ] }
1502.00303
2951619115
Dynamic texture and scene classification are two fundamental problems in understanding natural video content. Extracting robust and effective features is a crucial step towards solving these problems. However, the existing approaches suffer from sensitivity to varying illumination, viewpoint changes, or camera motion, or from the lack of spatial information. Inspired by the success of deep structures in image classification, we attempt to leverage a deep structure to extract features for dynamic texture and scene classification. To tackle the challenges in training a deep structure, we propose to transfer some prior knowledge from the image domain to the video domain. Specifically, we propose to apply a well-trained Convolutional Neural Network (ConvNet) as a mid-level feature extractor to extract features from each frame, and then form a representation of a video by concatenating the first- and second-order statistics over the mid-level features. We term this two-level feature extraction scheme a Transferred ConvNet Feature (TCoF). Moreover, we explore two different implementations of the TCoF scheme, i.e., the spatial TCoF and the temporal TCoF, in which the mean-removed frames and the differences between adjacent frames, respectively, are used as the inputs to the ConvNet. We evaluate the proposed spatial TCoF and temporal TCoF schemes systematically on three benchmark data sets, including DynTex, YUPENN, and Maryland, and demonstrate that the proposed approach yields superior performance.
The research history of dynamic texture classification is much longer than that of dynamic scene classification. The latter, as far as we know, started when two dynamic scene data sets -- the Maryland "Dynamic Scenes in the Wild" data set @cite_54 and the York stabilized dynamic scene data set @cite_8 -- were released. Although there may be no clear distinction in nature, the slight difference between dynamic texture and dynamic scene is that the frames in a dynamic texture video are images with richer texture, whereas the frames in a dynamic scene video show a natural scene evolving over time. In addition, regarding the data sets: compared to dynamic textures, which are usually stabilized videos, the dynamic scene data sets may include significant camera motion.
{ "cite_N": [ "@cite_54", "@cite_8" ], "mid": [ "2048790245", "1976566382" ], "abstract": [ "Scene recognition in an unconstrained setting is an open and challenging problem with wide applications. In this paper, we study the role of scene dynamics for improved representation of scenes. We subsequently propose dynamic attributes which can be augmented with spatial attributes of a scene for semantically meaningful categorization of dynamic scenes. We further explore accurate and generalizable computational models for characterizing the dynamics of unconstrained scenes. The large intra-class variation due to unconstrained settings and the complex underlying physics present challenging problems in modeling scene dynamics. Motivated by these factors, we propose using the theory of chaotic systems to capture dynamics. Due to the lack of a suitable dataset, we compiled a dataset of ‘in-the-wild’ dynamic scenes. Experimental results show that the proposed framework leads to the best classification rate among other well-known dynamic modeling techniques. We also show how these dynamic features provide a means to describe dynamic scenes with motion-attributes, which then leads to meaningful organization of the video data.", "We present the DynTex database of high-quality dynamic texture videos. It consists of over 650 sequences of dynamic textures, mostly in everyday surroundings. Additionally, we propose a scheme for the manual annotation of the sequences based on a detailed analysis of the physical processes underlying the dynamic textures. Using this scheme we describe the texture sequences in terms of both visual structure and semantic content. The videos and annotations are made publicly available for scientific research." ] }
The aforementioned methods can be roughly divided into two categories: the global approaches and the local approaches. The global approaches extract features from each frame in a video sequence by treating the frame as a whole, e.g., LDS @cite_35 and GIST @cite_12 . While the global approaches describe the spatial layout well, they suffer from sensitivity to illumination variations, viewpoint changes, and scale and rotation variations. The local approaches construct statistics (e.g., a histogram) over a collection of features extracted from local patches in each frame or local volumes in a video sequence, including LBP-TOP @cite_49 , LQP-TOP @cite_46 , BoSE @cite_57 , and Bag of LDS @cite_47 . While the local approaches are robust to transformations (e.g., rotation, illumination), they suffer from the lack of spatial layout information, which is important for representing a dynamic texture or dynamic scene.
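As a rough illustration of the local approaches described above, the sketch below computes a basic 8-neighbour LBP histogram for a single frame; LBP-TOP extends the same idea by computing such histograms on the XY, XT, and YT planes of a video volume and concatenating them. This is a simplified didactic version, not the implementation used in the cited papers.

```python
import numpy as np

def lbp_histogram(frame):
    """Minimal 8-neighbour Local Binary Pattern histogram (sketch).

    Each interior pixel is compared with its 8 neighbours; the
    resulting 8-bit code indexes a 256-bin histogram, which serves
    as a local, layout-free texture descriptor.
    """
    f = frame.astype(np.int32)
    c = f[1:-1, 1:-1]                       # interior (center) pixels
    code = np.zeros_like(c)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        # neighbour of every center pixel, shifted by (dy, dx)
        nb = f[1 + dy:f.shape[0] - 1 + dy, 1 + dx:f.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.int32) << bit
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()                # normalised 256-bin descriptor
```

Because the histogram discards pixel positions, the descriptor is robust to spatial shifts and monotonic gray-scale changes, which is exactly the robustness/layout trade-off discussed above.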
{ "cite_N": [ "@cite_35", "@cite_47", "@cite_57", "@cite_49", "@cite_46", "@cite_12" ], "mid": [ "", "1992960277", "2006656585", "2139916508", "1990063939", "1566135517" ], "abstract": [ "", "We consider the problem of categorizing video sequences of dynamic textures, i.e., nonrigid dynamical objects such as fire, water, steam, flags, etc. This problem is extremely challenging because the shape and appearance of a dynamic texture continuously change as a function of time. State-of-the-art dynamic texture categorization methods have been successful at classifying videos taken from the same viewpoint and scale by using a Linear Dynamical System (LDS) to model each video, and using distances or kernels in the space of LDSs to classify the videos. However, these methods perform poorly when the video sequences are taken under a different viewpoint or scale. In this paper, we propose a novel dynamic texture categorization framework that can handle such changes. We model each video sequence with a collection of LDSs, each one describing a small spatiotemporal patch extracted from the video. This Bag-of-Systems (BoS) representation is analogous to the Bag-of-Features (BoF) representation for object recognition, except that we use LDSs as feature descriptors. This choice poses several technical challenges in adopting the traditional BoF approach. Most notably, the space of LDSs is not euclidean; hence, novel methods for clustering LDSs and computing codewords of LDSs need to be developed. We propose a framework that makes use of nonlinear dimensionality reduction and clustering techniques combined with the Martin distance for LDSs to tackle these issues. 
Our experiments compare the proposed BoS approach to existing dynamic texture categorization methods and show that it can be used for recognizing dynamic textures in challenging scenarios which could not be handled by existing methods.", "This paper presents a unified bag of visual word (BoW) framework for dynamic scene recognition. The approach builds on primitive features that uniformly capture spatial and temporal orientation structure of the imagery (e.g., video), as extracted via application of a bank of spatiotemporally oriented filters. Various feature encoding techniques are investigated to abstract the primitives to an intermediate representation that is best suited to dynamic scene representation. Further, a novel approach to adaptive pooling of the encoded features is presented that captures spatial layout of the scene even while being robust to situations where camera motion and scene dynamics are confounded. The resulting overall approach has been evaluated on two standard, publically available dynamic scene datasets. The results show that in comparison to a representative set of alternatives, the proposed approach outperforms the previous state-of-the-art in classification accuracy by 10 .", "Dynamic texture (DT) is an extension of texture to the temporal domain. Description and recognition of DTs have attracted growing attention. In this paper, a novel approach for recognizing DTs is proposed and its simplifications and extensions to facial image analysis are also considered. First, the textures are modeled with volume local binary patterns (VLBP), which are an extension of the LBP operator widely used in ordinary texture analysis, combining motion and appearance. To make the approach computationally simple and easy to extend, only the co-occurrences of the local binary patterns on three orthogonal planes (LBP-TOP) are then considered. 
A block-based method is also proposed to deal with specific dynamic events, such as facial expressions, in which local information and its spatial locations should also be taken into account. In experiments with two DT databases, DynTex and Massachusetts Institute of Technology (MIT), both the VLBP and LBP-TOP clearly outperformed the earlier approaches. The proposed block-based method was evaluated with the Cohn-Kanade facial expression database with excellent results. The advantages of our approach include local processing, robustness to monotonic gray-scale changes, and simple computation.", "One of the principal causes of image quality degradation is blur. This frequent phenomenon is usually a result of misfocused optics or camera motion, and it is very difficult to undo. Beyond the impaired visual quality, blurring causes problems for computer vision algorithms. In this paper, we present a simple yet powerful image descriptor which is robust against the most common image blurs. The proposed method is based on quantizing the phase information of the local Fourier transform, and it can be used to characterize the underlying image texture. We show how to construct several variants of our descriptor by varying the technique for local phase estimation and utilizing the proposed data decorrelation scheme. The descriptors are assessed in texture and face recognition experiments, and the results are compared with several state-of-the-art methods. The difference from the baseline is considerable in the case of blurred images, but our method also gives a highly competitive performance with sharp images. Highlights: we present a new blur-insensitive texture descriptor; descriptors are constructed by quantizing local frequency data; the method is assessed in texture and face classification experiments; we show clear improvements over LBP, Gabor, VZ-MR8, VZ-joint, and BIF; and our descriptors give the best results for both sharp and blurred images.", "In this paper, we propose a computational model of the recognition of real world scenes that bypasses the segmentation and the processing of individual objects or regions. The procedure is based on a very low dimensional representation of the scene, that we term the Spatial Envelope. We propose a set of perceptual dimensions (naturalness, openness, roughness, expansion, ruggedness) that represent the dominant spatial structure of a scene. Then, we show that these dimensions may be reliably estimated using spectral and coarsely localized information. The model generates a multidimensional space in which scenes sharing membership in semantic categories (e.g., streets, highways, coasts) are projected close together. The performance of the spatial envelope model shows that specific information about object shape or identity is not a requirement for scene categorization and that modeling a holistic representation of the scene informs about its probable semantic category." ] }
In this paper, we attempt to leverage a deep structure with knowledge transferred from the image domain to construct a robust and effective representation for dynamic textures and scenes. Specifically, we propose to use a pre-trained ConvNet -- which has been trained on the large-scale image data set ImageNet @cite_2 , @cite_38 , @cite_43 -- as transferred (prior) knowledge, and then fine-tune the ConvNet with the frames of the videos in the training set. Equipped with the trained ConvNet, we extract mid-level features from each frame of a video and represent the video by the concatenation of the first- and second-order statistics over the mid-level features. Compared to previous studies, our approach possesses the following advantages:
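The two-level pooling step described above (first- and second-order statistics over per-frame mid-level features) can be sketched as follows. This is a simplified, hedged reading of the TCoF pooling: the per-dimension variance stands in for the second-order statistics, and `frame_feats` is assumed to already hold ConvNet activations, one row per frame.

```python
import numpy as np

def video_descriptor(frame_feats):
    """Pool per-frame features into one video vector (sketch).

    frame_feats: (T, D) array, one D-dim mid-level feature per frame,
    e.g. activations of a pre-trained ConvNet layer (assumed given).
    The video is represented by concatenating first-order (mean) and
    second-order (per-dimension variance) statistics.
    """
    mu = frame_feats.mean(axis=0)       # first-order statistics
    var = frame_feats.var(axis=0)       # second-order statistics
    return np.concatenate([mu, var])    # length 2*D video descriptor

def temporal_inputs(frames):
    """Adjacent-frame differences, the input used by the temporal variant."""
    return frames[1:] - frames[:-1]
```

Pooling over time makes the descriptor length independent of the number of frames, so videos of different durations map to vectors of the same size for a standard classifier.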
{ "cite_N": [ "@cite_38", "@cite_43", "@cite_2" ], "mid": [ "2953391683", "2951781960", "2308045930" ], "abstract": [ "Recent results indicate that the generic descriptors extracted from the convolutional neural networks are very powerful. This paper adds to the mounting evidence that this is indeed the case. We report on a series of experiments conducted for different recognition tasks using the publicly available code and model of the network which was trained to perform object classification on ILSVRC13. We use features extracted from the network as a generic image representation to tackle the diverse range of recognition tasks of object image classification, scene recognition, fine grained recognition, attribute detection and image retrieval applied to a diverse set of datasets. We selected these tasks and datasets as they gradually move further away from the original task and data the network was trained to solve. Astonishingly, we report consistent superior results compared to the highly tuned state-of-the-art systems in all the visual classification tasks on various datasets. For instance retrieval it consistently outperforms low memory footprint methods except for sculptures dataset. The results are achieved using a linear SVM classifier (or @math distance in case of retrieval) applied to a feature representation of size 4096 extracted from a layer in the net. The representations are further modified using simple augmentation techniques e.g. jittering. The results strongly suggest that features obtained from deep learning with convolutional nets should be the primary candidate in most visual recognition tasks.", "Evidence is mounting that Convolutional Networks (ConvNets) are the most effective representation learning method for visual recognition tasks. 
In the common scenario, a ConvNet is trained on a large labeled dataset (source) and the feed-forward units activation of the trained network, at a certain layer of the network, is used as a generic representation of an input image for a task with relatively smaller training set (target). Recent studies have shown this form of representation transfer to be suitable for a wide range of target visual recognition tasks. This paper introduces and investigates several factors affecting the transferability of such representations. It includes parameters for training of the source ConvNet such as its architecture, distribution of the training data, etc. and also the parameters of feature extraction such as layer of the trained ConvNet, dimensionality reduction, etc. Then, by optimizing these factors, we show that significant improvements can be achieved on various (17) visual recognition tasks. We further show that these visual recognition tasks can be categorically ordered based on their distance from the source task such that a correlation between the performance of tasks and their distance from the source task w.r.t. the proposed factors is observed.", "" ] }
1502.00138
2295168480
This paper presents a new method for automatically generating numerical invariants for imperative programs. Given a program, our procedure computes a binary input/output relation on program states which over-approximates the behaviour of the program. It is compositional in the sense that it operates by decomposing the program into parts, computing an abstract meaning of each part, and then composing the meanings. Our method for approximating loop behaviour is based on first approximating the meaning of the loop body, extracting recurrence relations from that approximation, and then using the closed forms to approximate the loop. Our experiments demonstrate that on verification tasks, our method is competitive with leading invariant generation and verification tools.
Ammarguellat and Harrison present a method for detecting induction variables which is compositional in the sense that it uses closed forms for inner loops in order to recognize nested recurrences @cite_29 . A map from variables to symbolic terms (effectively a symbolic constant propagation domain) is used as the abstract domain. Kovács presents a technique for discovering invariant polynomial equations based on solving recurrence relations @cite_11 . The simple and stratified recurrence equations considered in this paper are a strict subset of the recurrences considered in @cite_11 , but our algorithm for solving recurrences is simpler. @cite_19 presents a technique for computing approximations of loops which uses polynomial curve-fitting to directly compute closed forms for recurrences rather than extracting recurrences and then solving them in a separate step.
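To make the recurrence-based approach concrete, the sketch below handles the simplest case: a loop body of the form `x = a*x + c` induces the recurrence x_{k+1} = a*x_k + c, whose closed form can replace the loop in a summary. This is a toy illustration of the general idea, not the algorithm of this paper or of @cite_11 .

```python
from fractions import Fraction

def closed_form(a, c, x0, n):
    """Closed form of x_{k+1} = a*x_k + c after n iterations.

    This is the kind of recurrence a compositional analysis extracts
    from a loop body such as `x = a*x + c`; the closed form then
    summarises the whole loop without iterating it:
      a == 1:  x_n = x0 + c*n            (simple induction variable)
      a != 1:  x_n = a^n*x0 + c*(a^n - 1)/(a - 1)
    """
    a, c, x0 = Fraction(a), Fraction(c), Fraction(x0)
    if a == 1:
        return x0 + c * n
    return a**n * x0 + c * (a**n - 1) / (a - 1)

def run_loop(a, c, x0, n):
    """Concrete execution of the same loop, for comparison."""
    x = Fraction(x0)
    for _ in range(n):
        x = a * x + c
    return x
```

Exact rationals are used so that the closed form and the concrete execution agree bit-for-bit; an invariant generator would manipulate the closed form symbolically in `n` instead of evaluating it.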
{ "cite_N": [ "@cite_19", "@cite_29", "@cite_11" ], "mid": [ "2218365969", "2044738061", "1585981132" ], "abstract": [ "Many software model checkers only detect counterexamples with deep loops after exploring numerous spurious and increasingly longer counterexamples. We propose a technique that aims at eliminating this weakness by constructing auxiliary paths that represent the effect of a range of loop iterations. Unlike acceleration, which captures the exact effect of arbitrarily many loop iterations, these auxiliary paths may under-approximate the behaviour of the loops. In return, the approximation is sound with respect to the bit-vector semantics of programs. Our approach supports arbitrary conditions and assignments to arrays in the loop body, but may as a result introduce quantified conditionals. To reduce the resulting performance penalty, we present two quantifier elimination techniques specially geared towards our application. Loop under-approximation can be combined with a broad range of verification techniques. We paired our techniques with lazy abstraction and bounded model checking, and evaluated the resulting tool on a number of buffer overflow benchmarks, demonstrating its ability to efficiently detect deep counterexamples in C programs that manipulate arrays.", "The recognition of recurrence relations is important in several ways to the compilation of programs. Induction variables, the simplest form of recurrence, are pivotal in loop optimizations and dependence testing. Many recurrence relations, although expressed sequentially by the programmer, lend themselves to efficient vector or parallel computation. Despite the importance of recurrences, vectorizing and parallelizing compilers to date have recognized them only in an ad-hoc fashion. In this paper we put forth a systematic method for recognizing recurrence relations automatically. Our method has two parts. 
First, abstract interpretation [CC77, CC79] is used to construct a map that associates each variable assigned in a loop with a symbolic form (expression) of its value. Second, the elements of this map are matched with patterns that describe recurrence relations. The scheme is easily extensible by the addition of templates, and is able to recognize nested recurrences by the propagation of the closed forms of recurrences from inner loops. We present some applications of this method and a proof of its correctness.", "We present a method for generating polynomial invariants for a subfamily of imperative loops operating on numbers, called the P-solvable loops. The method uses algorithmic combinatorics and algebraic techniques. The approach is shown to be complete for some special cases. By completeness we mean that it generates a set of polynomial invariants from which, under additional assumptions, any polynomial invariant can be derived. These techniques are implemented in a new software package Aligator written in Mathematica and successfully tried on many programs implementing interesting algorithms working on numbers." ] }
Acceleration is a technique closely related to recurrence analysis that was pioneered in infinite-state model checking @cite_2 @cite_17 @cite_8 , and which has recently found use in program analysis @cite_22 @cite_6 @cite_10 . Given a set of reachable states and an affine transformation describing the body of a loop, acceleration computes a post-image which describes the set of reachable states after executing any number of iterations of the loop (although there is recent work that computes over-approximate post-images @cite_22 @cite_10 ). In contrast, our technique is approximate rather than exact, and computes loop summaries rather than post-images. As a result of these two features, our analysis can be applied to arbitrary loops, while acceleration is classically limited to simple loops where the body consists of a sequence of assignment statements.
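For a concrete, hypothetical illustration of exact acceleration on an affine loop body x <- A x + b: the closed form x_n = A^n x0 + (I + A + ... + A^{n-1}) b gives the post-state after n iterations. Real acceleration tools compute a symbolic relation valid for all n at once; this sketch only evaluates the closed form for one concrete n and checks it against direct execution.

```python
import numpy as np

def accelerate(A, b, x0, n):
    """Exact post-state after n iterations of the affine loop x <- A@x + b.

    Implements x_n = A^n x0 + (I + A + ... + A^{n-1}) b.
    """
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x0 = np.asarray(x0, dtype=float)
    An = np.linalg.matrix_power(A, n)
    S = np.zeros_like(A)                # accumulates I + A + ... + A^{n-1}
    M = np.eye(A.shape[0])
    for _ in range(n):
        S = S + M
        M = M @ A
    return An @ x0 + S @ b

def run_loop(A, b, x0, n):
    """Concrete execution of the same loop, for comparison."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n):
        x = np.asarray(A, dtype=float) @ x + np.asarray(b, dtype=float)
    return x
```

For instance, with A = [[1, 1], [0, 1]] and b = [0, 1] (the counting loop x += y; y += 1), the closed form reproduces exactly what n concrete iterations compute, which is why acceleration can eliminate widening for such loops.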
{ "cite_N": [ "@cite_22", "@cite_8", "@cite_6", "@cite_2", "@cite_10", "@cite_17" ], "mid": [ "2689701569", "1821880317", "", "", "2095810701", "1547584984" ], "abstract": [ "Linear Relation Analysis [CH78, Hal79] is one of the first, but still one of the most powerful, abstract interpretations working in an infinite lattice. As such, it makes use of a widening operator to enforce the convergence of fixpoint computations. While the approximation due to widening can be arbitrarily refined by delaying the application of widening, the analysis quickly becomes too expensive with the increase of delay. Previous attempts at improving the precision of widening are not completely satisfactory, since none of them is guaranteed to improve the precision of the result, and they can nevertheless increase the cost of the analysis. In this paper, we investigate an improvement of Linear Relation Analysis consisting in computing, when possible, the exact (abstract) effect of a loop. This technique is fully compatible with the use of widening, and whenever it applies, it improves both the precision and the performance of the analysis.", "Symbolic model checking provides partially effective verification procedures that can handle systems with an infinite state space. So-called “acceleration techniques” enhance the convergence of fixpoint computations by computing the transitive closure of some transitions. In this paper we develop a new framework for symbolic model checking with accelerations. We also propose and analyze new symbolic algorithms using accelerations to compute reachability sets.", "", "", "We present abstract acceleration techniques for computing loop invariants for numerical programs with linear assignments and conditionals. 
Whereas abstract interpretation techniques typically over-approximate the set of reachable states iteratively, abstract acceleration captures the effect of the loop with a single, non-iterative transfer function applied to the initial states at the loop head. In contrast to previous acceleration techniques, our approach applies to any linear loop without restrictions. Its novelty lies in the use of the Jordan normal form decomposition of the loop body to derive symbolic expressions for the entries of the matrix modeling the effect of n ≥ 0 iterations of the loop. The entries of such a matrix depend on n through complex polynomial, exponential and trigonometric functions. Therefore, we introduce an abstract domain for matrices that captures the linear inequality relations between these complex expressions. This results in an abstract matrix for describing the fixpoint semantics of the loop. Our approach integrates smoothly into standard abstract interpreters and can handle programs with nested loops and loops containing conditional branches. We evaluate it over small but complex loops that are commonly found in control software, comparing it with other tools for computing linear loop invariants. The loops in our benchmarks typically exhibit polynomial, exponential and oscillatory behaviors that present challenges to existing approaches. Our approach finds non-trivial invariants to prove useful bounds on the values of variables for such loops, clearly outperforming the existing approaches in terms of precision while exhibiting good performance.", "Finite linear systems are finite sets of linear functions whose guards are defined by Presburger formulas, and whose associated square matrices generate a finite multiplicative monoid. We prove that for finite linear systems, the accelerations of sequences of transitions always produce an effective Presburger-definable relation. 
We then show how to choose the good sequences of length n whose number is polynomial in n although the total number of sequences of length n is exponential in n. We implement these theoretical results in the tool FAST [FAS] (Fast Acceleration of Symbolic Transition systems). FAST computes in few seconds the minimal deterministic finite automata that represent the reachability sets of 8 well-known broadcast protocols." ] }
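The acceleration idea described above can be made concrete on a toy example. The sketch below (ours, not taken from any of the cited tools) computes the exact post-state of a simple affine loop v' = A v + b from the closed form v_n = A^n v_0 + (A^{n-1} + ... + A + I) b, instead of re-executing the loop body; the example loop and all names are illustrative:

```python
# Illustrative sketch of acceleration for the affine loop
#   x, y = x + 2*y, y + 1
# whose body is the affine map v' = A v + b.  The state after n iterations
# has the closed form  v_n = A^n v_0 + (A^{n-1} + ... + A + I) b.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_vec(A, v):
    return [sum(A[i][k] * v[k] for k in range(len(v))) for i in range(len(A))]

def accelerate(A, b, v0, n):
    """Closed-form state after n iterations of v' = A v + b."""
    dim = len(v0)
    I = [[1 if i == j else 0 for j in range(dim)] for i in range(dim)]
    An, S = I, [[0] * dim for _ in range(dim)]  # S accumulates I + A + ... + A^{n-1}
    for _ in range(n):
        S = [[S[i][j] + An[i][j] for j in range(dim)] for i in range(dim)]
        An = mat_mul(An, A)
    Av0 = mat_vec(An, v0)
    Sb = mat_vec(S, b)
    return [p + q for p, q in zip(Av0, Sb)]

# The loop body  x += 2*y; y += 1  (simultaneous update) as an affine map:
A = [[1, 2], [0, 1]]
b = [0, 1]

def run_loop(v0, n):  # reference: execute the loop concretely
    x, y = v0
    for _ in range(n):
        x, y = x + 2 * y, y + 1
    return [x, y]

assert accelerate(A, b, [5, 0], 7) == run_loop([5, 0], 7)
assert accelerate(A, b, [0, 0], 3) == [6, 3]
```

Real accelerators compute the n-fold effect symbolically (e.g., over Presburger formulas) rather than for one fixed n; this sketch only shows the closed-form idea.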
1502.00138
2295168480
This paper presents a new method for automatically generating numerical invariants for imperative programs. Given a program, our procedure computes a binary input output relation on program states which over-approximates the behaviour of the program. It is compositional in the sense that it operates by decomposing the program into parts, computing an abstract meaning of each part, and then composing the meanings. Our method for approximating loop behaviour is based on first approximating the meaning of the loop body, extracting recurrence relations from that approximation, and then using the closed forms to approximate the loop. Our experiments demonstrate that on verification tasks, our method is competitive with leading invariant generation and verification tools.
@cite_3 and @cite_4 present compositional analysis techniques based on predicate abstraction. In addition to predicate abstraction, there are a few papers which use numerical abstract domains for compositional analysis. These include an algorithm for detecting affine equalities between program variables @cite_23 , an algorithm for detecting polynomial equalities between program variables @cite_20 , a disjunctive polyhedra analysis which uses widening to compute loop summaries @cite_18 , and a method for automatically synthesizing transfer functions for template abstract domains using quantifier elimination @cite_0 . Our abstract domain is the set of arbitrary arithmetic formulas, which is more expressive than these domains, but which (as usual) incurs a price in performance. It would be interesting to apply abstractions to our formulas to improve the performance of our analysis.
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_3", "@cite_0", "@cite_23", "@cite_20" ], "mid": [ "1844529821", "139856780", "1600009974", "2140678594", "", "1603048013" ], "abstract": [ "Polyhedral analysis [9] is an abstract interpretation used for automatic discovery of invariant linear inequalities among numerical variables of a program. Convexity of this abstract domain allows efficient analysis but also loses precision via convex-hull and widening operators. To selectively recover the loss of precision, sets of polyhedra (disjunctive elements) may be used to capture more precise invariants. However a balance must be struck between precision and cost. We introduce the notion of affinity to characterize how closely related is a pair of polyhedra. Finding related elements in the polyhedron (base) domain allows the formulation of precise hull and widening operators lifted to the disjunctive (powerset extension of the) polyhedron domain. We have implemented a modular static analyzer based on the disjunctive polyhedral analysis where the relational domain and the proposed operators can progressively enhance precision at a reasonable cost.", "Loop leaping is the colloquial name given to a form of program analysis in which summaries are derived for nested loops starting from the innermost loop and proceeding in a bottom-up fashion considering one more loop at a time. Loop leaping contrasts with classical approaches to finding loop invariants that are iterative; loop leaping is compositional requiring each stratum in the nest of loops to be considered exactly once. The approach is attractive in predicate abstraction where disjunctive domains are increasingly used that present long ascending chains. This paper proposes a simple and an efficient approach for loop leaping for these domains based on viewing loops as closure operators.", "Existing program analysis tools that implement abstraction rely on saturating procedures to compute over-approximations of fixpoints. 
As an alternative, we propose a new algorithm to compute an over-approximation of the set of reachable states of a program by replacing loops in the control flow graph by their abstract transformer. Our technique is able to generate diagnostic information in case of property violations, which we call leaping counterexamples. We have implemented this technique and report experimental results on a set of large ANSI-C programs using abstract domains that focus on properties related to string-buffers.", "We propose a method for automatically generating abstract transformers for static analysis by abstract interpretation. The method focuses on linear constraints on programs operating on rational, real or floating-point variables and containing linear assignments and tests. In addition to loop-free code, the same method also applies for obtaining least fixed points as functions of the precondition, which permits the analysis of loops and recursive functions. Our algorithms are based on new quantifier elimination and symbolic manipulation techniques. Given the specification of an abstract domain, and a program block, our method automatically outputs an implementation of the corresponding abstract transformer. It is thus a form of program transformation. The motivation of our work is data-flow synchronous programming languages, used for building control-command embedded systems, but it also applies to imperative and functional programming.", "", "We present a novel static analysis for approximating the algebraic relational semantics of imperative programs. Our method is based on abstract interpretation in the lattice of polynomial pseudo ideals of bounded degree – finite-dimensional vector spaces of polynomials of bounded degree which are closed under bounded degree products. For a fixed bound, the space complexity of our approach and the iterations required to converge on fixed points are bounded by a polynomial in the number of program variables. 
Nevertheless, for several programs taken from the literature on non-linear polynomial invariant generation, our analysis produces results that are as precise as those produced by more heavy-weight Grobner basis methods." ] }
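The recurrence-based recipe from the paper's abstract (approximate the loop body, extract recurrences, use their closed forms as a loop summary) can be illustrated on a toy loop. The hypothetical loop and the hand-derived closed forms below are ours, not the paper's implementation:

```python
# Toy illustration: the loop body  x = x + 2; y = y + x  induces the
# recurrences  x_{k+1} = x_k + 2  and  y_{k+1} = y_k + x_{k+1}.
# Solving them gives closed forms that summarize the whole loop.

def loop_concrete(x0, y0, n):
    x, y = x0, y0
    for _ in range(n):
        x = x + 2
        y = y + x
    return x, y

def loop_summary(x0, y0, n):
    # Closed forms:  x_n = x0 + 2n
    #                y_n = y0 + n*x0 + n*(n+1)
    # (y accumulates x0 + 2k for k = 1..n, and the sum of 2k is n*(n+1)).
    return x0 + 2 * n, y0 + n * x0 + n * (n + 1)

for n in (0, 1, 5, 20):
    assert loop_concrete(3, 1, n) == loop_summary(3, 1, n)
```

The summary is a binary input/output relation parameterized by the iteration count, which is exactly the shape of summary the compositional approach composes with the rest of the program.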
1502.00138
2295168480
This paper presents a new method for automatically generating numerical invariants for imperative programs. Given a program, our procedure computes a binary input output relation on program states which over-approximates the behaviour of the program. It is compositional in the sense that it operates by decomposing the program into parts, computing an abstract meaning of each part, and then composing the meanings. Our method for approximating loop behaviour is based on first approximating the meaning of the loop body, extracting recurrence relations from that approximation, and then using the closed forms to approximate the loop. Our experiments demonstrate that on verification tasks, our method is competitive with leading invariant generation and verification tools.
Our linearization algorithm was inspired by Miné's procedure for approximating non-linear abstract transformers @cite_13 . Miné's procedure abstracts non-linear terms by linear terms with interval coefficients, using the abstract value in the pre-state to derive intervals for variables. Our algorithm abstracts non-linear terms by sets of symbolic and concrete intervals, and applies to the more general setting of approximating arbitrary formulas.
{ "cite_N": [ "@cite_13" ], "mid": [ "1921152384" ], "abstract": [ "We present lightweight and generic symbolic methods to improve the precision of numerical static analyses based on Abstract Interpretation. The main idea is to simplify numerical expressions before they are fed to abstract transfer functions. An important novelty is that these simplifications are performed on-the-fly, using information gathered dynamically by the analyzer. A first method, called “linearization,” allows abstracting arbitrary expressions into affine forms with interval coefficients while simplifying them. A second method, called “symbolic constant propagation,” enhances the simplification feature of the linearization by propagating assigned expressions in a symbolic way. Combined together, these methods increase the relationality level of numerical abstract domains and make them more robust against program transformations. We show how they can be integrated within the classical interval, octagon and polyhedron domains. These methods have been incorporated within the Astree static analyzer that checks for the absence of run-time errors in embedded critical avionics software. We present an experimental proof of their usefulness." ] }
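The interval-coefficient idea behind this style of linearization can be sketched in a few lines. This is an illustrative toy (not the Astrée implementation): the non-linear term x*y is abstracted by the interval-linear form [a, b]·x, where [a, b] encloses y in the pre-state:

```python
# Sketch of interval-based linearization: abstract x*y as [y_lo, y_hi] * x,
# using pre-state bounds on y.  Names and bounds are illustrative.

def linearize_product(y_lo, y_hi):
    """Return a function giving sound bounds on x*y for y in [y_lo, y_hi]."""
    def bounds(x):
        candidates = (y_lo * x, y_hi * x)
        return min(candidates), max(candidates)
    return bounds

# Soundness check: for every concrete y in [2, 4], x*y lies within the
# bounds the interval-linear form predicts for that x.
form = linearize_product(2, 4)
for x in (-3, -1, 0, 1, 5):
    lo, hi = form(x)
    for y in (2, 2.5, 3, 4):
        assert lo <= x * y <= hi
```

The payoff is that a relational linear domain (octagons, polyhedra) can then reason about the linearized form even though the original term was non-linear.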
1502.00041
1583152659
We show that in an equity market model with Knightian uncertainty regarding the relative risk and covariance structure of its assets, the arbitrage function -- defined as the reciprocal of the highest return on investment that can be achieved relative to the market using nonanticipative strategies, and under any admissible market model configuration -- is a viscosity solution of an associated Hamilton-Jacobi-Bellman (HJB) equation under appropriate boundedness, continuity and Markovian assumptions on the uncertainty structure. This result generalizes that of Fernholz and Karatzas (2011), who characterized this arbitrage function as a classical solution of a Cauchy problem for this HJB equation under much stronger conditions than those needed here.
For a model with no uncertainty and with local volatility matrix @math and relative risk vector @math at time @math , the viscosity characterization was obtained in [Proposition 4.5] BHS , but with an additional local condition on @math and @math . This (local) condition is also a typical assumption in the previous literature on stochastic control and dynamic programming, e.g., @cite_15 , @cite_17 and @cite_28 (it is even assumed in @cite_9 that @math and @math are continuous and twice differentiable in @math ).
{ "cite_N": [ "@cite_28", "@cite_9", "@cite_15", "@cite_17" ], "mid": [ "2021318678", "", "2007849171", "2052284267" ], "abstract": [ "Given a controlled stochastic process, the reachability set is the collection of all initial data from which the state process can be driven into a target set at a specified time. Differential properties of these sets are studied by the dynamic programming principle which is proved by the Jankov-von Neumann measurable selection theorem. This principle implies that the reachability sets satisfy a geometric partial differential equation, which is the analogue of the Hamilton-Jacobi-Bellman equation for this problem. By appropriately choosing the controlled process, this connection provides a stochastic representation for mean curvature type geometric flows. Another application is the super-replication problem in financial mathematics. Several applications in this direction are also discussed.", "", "We prove a weak version of the dynamic programming principle for standard stochastic control problems and mixed control-stopping problems, which avoids the technical difficulties related to the measurable selection argument. In the Markov case, our result is tailor-made for the derivation of the dynamic programming equation in the sense of viscosity solutions.", "The optimal control problem where the state is governed by an Ito stochastic differential equation (possibly just an ordinary differential equation) is formulated in martingale terms. Under a coercivity condition (which is weaker than compactness of the control set), a convexity condition, and mild continuity hypotheses on the data, it is shown by the direct method that optimal controls exist. Hard and soft constraints are allowed. In the absence of soft constraints it is shown that there exists an optimal control that is a function only of the present time and state, i.e., the synthesis problem has a solution. The main tool here is Krylov’s Markovian Selection Theorem." ] }
1502.00374
2044296771
This paper investigates a general framework to discover categories of unlabeled scene images according to their appearances (i.e., textures and structures). We jointly solve the two coupled tasks in an unsupervised manner: 1) classifying images without predetermining the number of categories and 2) pursuing generative model for each category. In our method, each image is represented by two types of image descriptors that are effective to capture image appearances from different aspects. By treating each image as a graph vertex, we build up a graph and pose the image categorization as a graph partition process. Specifically, a partitioned subgraph can be regarded as a category of scenes and we define the probabilistic model of graph partition by accumulating the generative models of all separated categories. For efficient inference with the graph, we employ a stochastic cluster sampling algorithm, which is designed based on the Metropolis–Hasting mechanism. During the iterations of inference, the model of each category is analytically updated by a generative learning algorithm. In the experiments, our approach is validated on several challenging databases, and it outperforms other popular state-of-the-art methods. The implementation details and empirical analysis are presented as well.
Most scene image categorization methods involve supervised learning, i.e., training a multi-class predictor (classifier) on manually labeled images @cite_10 . Unsupervised image categorization is often posed as clustering images into groups according to their contents (i.e., appearances and/or structures). In some traditional methods @cite_35 , various low-level features (such as color, filter banks, and textons @cite_36 ) are first extracted from images, and a clustering algorithm (e.g., @math -means or spectral clustering) is then applied to discover categories of the samples.
{ "cite_N": [ "@cite_36", "@cite_35", "@cite_10" ], "mid": [ "2103215849", "2066066981", "1487348613" ], "abstract": [ "Subjects were asked to identify scenes after very brief exposures (<70 ms). Their performance was always above chance and improved with exposure duration, confirming that subjects can get the gist of a scene with one fixation. We propose that a simple texture analysis of the image can provide a useful cue towards rapid scene identification. Our model learns texture features across scene categories and then uses this knowledge to identify new scenes. The texture analysis leads to similar identifications and confusions as subjects with limited processing time. We conclude that early scene identification can be explained with a simple texture recognition model.", "", "A traditional approach to retrieving images is to manually annotate the image with textual keywords and then retrieve images using these keywords. Manual annotation is expensive and recently a few approaches have been proposed for automatically annotating images. These techniques usually learn a statistical model using a training set of images annotated with keywords and use this model to automatically annotate test images. While promising, these techniques have generally been tested on a few thousand images, with vocabularies of a few hundred words or less and using relatively high quality training data where the keywords are categories objects and are directly correlated with the visual data. Here, we investigate the problem of automatically annotating a large dataset of news photographs using low quality training data and a large vocabulary. We use 56,117 images and captions from Yahoo News Photos for our training and test data. The captions in the training portion of this data often contain a great deal of text most of which does not directly describe the image and as labels are, therefore noisy. 
We use the Normalized Continuous Relevance Models for our annotation and discuss how to speed up the model (by a factor of 10) using a voting technique. An improved distance measure also improves precision. To handle noisy text data and the large vocabulary of 4073 words, we investigate using different kinds of words for training and show that words which describe the content of the picture are significantly more useful for annotating images. Previous work on annotating images has largely dealt with high quality keywords." ] }
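The traditional feature-clustering pipeline mentioned in the related work (extract low-level feature vectors, then cluster them) can be sketched with a minimal pure-Python k-means. The 2-D points below stand in for real image descriptors; this is an illustration, not any cited system:

```python
# Minimal k-means: assign each point to its nearest center, then move each
# center to the mean of its cluster; repeat.  Data and k are illustrative.
import random

def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: attach each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        # Update step: move each non-empty cluster's center to its mean.
        for j, cl in enumerate(clusters):
            if cl:
                centers[j] = tuple(sum(xs) / len(cl) for xs in zip(*cl))
    return centers, clusters

points = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0),   # one "category"
          (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]   # another "category"
centers, clusters = kmeans(points, k=2)
assert sorted(len(c) for c in clusters) == [3, 3]
```

Note that k must be fixed in advance here, which is precisely the limitation the paper's sampling-based approach is designed to remove.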
1502.00374
2044296771
This paper investigates a general framework to discover categories of unlabeled scene images according to their appearances (i.e., textures and structures). We jointly solve the two coupled tasks in an unsupervised manner: 1) classifying images without predetermining the number of categories and 2) pursuing generative model for each category. In our method, each image is represented by two types of image descriptors that are effective to capture image appearances from different aspects. By treating each image as a graph vertex, we build up a graph and pose the image categorization as a graph partition process. Specifically, a partitioned subgraph can be regarded as a category of scenes and we define the probabilistic model of graph partition by accumulating the generative models of all separated categories. For efficient inference with the graph, we employ a stochastic cluster sampling algorithm, which is designed based on the Metropolis–Hasting mechanism. During the iterations of inference, the model of each category is analytically updated by a generative learning algorithm. In the experiments, our approach is validated on several challenging databases, and it outperforms other popular state-of-the-art methods. The implementation details and empirical analysis are presented as well.
To handle diverse image content, effective image representations such as bag-of-words (BoW) have been proposed @cite_39 @cite_0 ; they represent an image using a pre-trained collection (i.e., dictionary) of visual words. Furthermore, @cite_17 present a spatial pyramid representation of BoW by pooling words at different image scales, and this representation effectively improves scene categorization results @cite_4 . @cite_38 propose to build an effective scene representation based on constrained and compressed domains.
{ "cite_N": [ "@cite_38", "@cite_4", "@cite_39", "@cite_0", "@cite_17" ], "mid": [ "2156890308", "2163175835", "2107034620", "2171706135", "2162915993" ], "abstract": [ "Holistic representations of natural scenes are an effective and powerful source of information for semantic classification and analysis of images. Despite the technological hardware and software advances, consumer single-sensor imaging devices technology are quite far from the ability of recognising scenes and or to exploit the visual content during (or after) acquisition time. The frequency domain has been successfully exploited to holistically encode the content of natural scenes in order to obtain a robust representation for scene classification. The authors exploit a holistic representation of the scene in the discrete cosine transform domain fully compatible with the JPEG format. The advised representation is coupled with a logistic classifier to perform classification of the scene at superordinate level of description (e.g. natural against artificial), or to discriminate between multiple classes of scenes usually acquired by a consumer imaging device (e.g. portrait, landscape and document). The proposed method is able to work in constrained domain. Experiments confirm the effectiveness of the proposed method. The obtained results closely match state-of-the-art methods in terms of accuracy outperforming in terms of computational resources.", "This paper proposes a method to recognize scene categories using bags of visual words obtained hierarchically partitioning into subregion the input images. Specifically, for each subregions the texton histogram and the extension of the sub-region is taken into account. The bags of visual words, obtained in this way, are weighted and used in a similarity measure during the categorization. 
Experimental tests using ten different scene categories show that the proposed approach achieves good performances with respect to the state of the art methods.", "We propose a novel approach to learn and recognize natural scene categories. Unlike previous work, it does not require experts to annotate the training set. We represent the image of a scene by a collection of local regions, denoted as codewords obtained by unsupervised learning. Each region is represented as part of a \"theme\". In previous work, such themes were learnt from hand-annotations of experts, while our method learns the theme distributions as well as the codewords distribution over the themes without supervision. We report satisfactory categorization performances on a large set of 13 categories of complex scenes.", "Given a set of images containing multiple object categories, we seek to discover those categories and their image locations without supervision. We achieve this using generative models from the statistical text literature: probabilistic Latent Semantic Analysis (pLSA), and Latent Dirichlet Allocation (LDA). In text analysis these are used to discover topics in a corpus using the bag-of-words document representation. Here we discover topics as object categories, so that an image containing instances of several categories is modelled as a mixture of topics. The models are applied to images by using a visual analogue of a word, formed by vector quantizing SIFT like region descriptors. We investigate a set of increasingly demanding scenarios, starting with image sets containing only two object categories through to sets containing multiple categories (including airplanes, cars, faces, motorbikes, spotted cats) and background clutter. The object categories sample both intra-class and scale variation, and both the categories and their approximate spatial layout are found without supervision. We also demonstrate classification of unseen images and images containing multiple objects. 
Performance of the proposed unsupervised method is compared to the semi-supervised approach of [7].1 1This work was sponsored in part by the EU Project CogViSys, the University of Oxford, Shell Oil, and the National Geospatial-Intelligence Agency.", "This paper presents a method for recognizing scene categories based on approximate global geometric correspondence. This technique works by partitioning the image into increasingly fine sub-regions and computing histograms of local features found inside each sub-region. The resulting \"spatial pyramid\" is a simple and computationally efficient extension of an orderless bag-of-features image representation, and it shows significantly improved performance on challenging scene categorization tasks. Specifically, our proposed method exceeds the state of the art on the Caltech-101 database and achieves high accuracy on a large database of fifteen natural scene categories. The spatial pyramid framework also offers insights into the success of several recently proposed image descriptions, including Torralba’s \"gist\" and Lowe’s SIFT descriptors." ] }
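The bag-of-words representation discussed above reduces to two steps: quantize each local descriptor to its nearest entry in a fixed "visual word" dictionary, then summarize the image as a histogram over words. The dictionary and descriptors below are made up for illustration:

```python
# Sketch of a bag-of-visual-words histogram.  In practice the dictionary is
# learned (e.g., by clustering SIFT descriptors); here it is hand-picked.

def nearest_word(desc, dictionary):
    """Index of the dictionary entry closest to desc (squared distance)."""
    return min(range(len(dictionary)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(desc, dictionary[i])))

def bow_histogram(descriptors, dictionary):
    hist = [0] * len(dictionary)
    for d in descriptors:
        hist[nearest_word(d, dictionary)] += 1
    return hist

dictionary = [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0)]   # 3 visual words
descriptors = [(0.1, 0.1), (0.9, 1.2), (0.2, 0.9), (1.1, 0.8)]
assert bow_histogram(descriptors, dictionary) == [1, 2, 1]
```

The spatial pyramid of @cite_17 extends this by computing such histograms over nested sub-regions and concatenating them, so that coarse spatial layout survives the orderless pooling.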
1502.00374
2044296771
This paper investigates a general framework to discover categories of unlabeled scene images according to their appearances (i.e., textures and structures). We jointly solve the two coupled tasks in an unsupervised manner: 1) classifying images without predetermining the number of categories and 2) pursuing generative model for each category. In our method, each image is represented by two types of image descriptors that are effective to capture image appearances from different aspects. By treating each image as a graph vertex, we build up a graph and pose the image categorization as a graph partition process. Specifically, a partitioned subgraph can be regarded as a category of scenes and we define the probabilistic model of graph partition by accumulating the generative models of all separated categories. For efficient inference with the graph, we employ a stochastic cluster sampling algorithm, which is designed based on the Metropolis–Hasting mechanism. During the iterations of inference, the model of each category is analytically updated by a generative learning algorithm. In the experiments, our approach is validated on several challenging databases, and it outperforms other popular state-of-the-art methods. The implementation details and empirical analysis are presented as well.
To exploit the latent semantic information of scene categories, @cite_28 discuss the probabilistic Latent Semantic Analysis (pLSA) model, which explains the distribution of features in an image as a mixture of a few "semantic topics". As an alternative model for capturing latent semantics, the Latent Dirichlet Allocation (LDA) model @cite_26 has been widely used as well.
{ "cite_N": [ "@cite_28", "@cite_26" ], "mid": [ "1589362500", "1880262756" ], "abstract": [ "Given a set of images of scenes containing multiple object categories (e.g. grass, roads, buildings) our objective is to discover these objects in each image in an unsupervised manner, and to use this object distribution to perform scene classification. We achieve this discovery using probabilistic Latent Semantic Analysis (pLSA), a generative model from the statistical text literature, here applied to a bag of visual words representation for each image. The scene classification on the object distribution is carried out by a k-nearest neighbour classifier. We investigate the classification performance under changes in the visual vocabulary and number of latent topics learnt, and develop a novel vocabulary using colour SIFT descriptors. Classification performance is compared to the supervised approaches of Vogel & Schiele [19] and Oliva & Torralba [11], and the semi-supervised approach of Fei Fei & Perona [3] using their own datasets and testing protocols. In all cases the combination of (unsupervised) pLSA followed by (supervised) nearest neighbour classification achieves superior results. We show applications of this method to image retrieval with relevance feedback and to scene classification in videos.", "We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. 
We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model." ] }
1502.00374
2044296771
This paper investigates a general framework to discover categories of unlabeled scene images according to their appearances (i.e., textures and structures). We jointly solve the two coupled tasks in an unsupervised manner: 1) classifying images without predetermining the number of categories and 2) pursuing generative model for each category. In our method, each image is represented by two types of image descriptors that are effective to capture image appearances from different aspects. By treating each image as a graph vertex, we build up a graph and pose the image categorization as a graph partition process. Specifically, a partitioned subgraph can be regarded as a category of scenes and we define the probabilistic model of graph partition by accumulating the generative models of all separated categories. For efficient inference with the graph, we employ a stochastic cluster sampling algorithm, which is designed based on the Metropolis–Hasting mechanism. During the iterations of inference, the model of each category is analytically updated by a generative learning algorithm. In the experiments, our approach is validated on several challenging databases, and it outperforms other popular state-of-the-art methods. The implementation details and empirical analysis are presented as well.
On the other hand, the number of categories must be predetermined or exhaustively selected in many previous unsupervised categorization approaches @cite_12 @cite_2 . In computer vision, stochastic sampling algorithms @cite_9 @cite_1 @cite_24 have been shown to be capable of flexibly generating new clusters and merging or removing existing clusters in a graph representation. Motivated by these works, we propose to automatically determine the number of image categories via stochastic sampling.
{ "cite_N": [ "@cite_9", "@cite_1", "@cite_24", "@cite_2", "@cite_12" ], "mid": [ "2376316332", "2171323859", "2004284939", "2134636339", "2020636921" ], "abstract": [ "Markov chain Monte Carlo (MCMC) methods have been used in many fields (physics, chemistry, biology, and computer science) for simulation, inference, and optimization. In many applications, Markov chains are simulated for sampling from target probabilities π(X) defined on graphs G. The graph vertices represent elements of the system, the edges represent spatial relationships, while X is a vector of variables on the vertices which often take discrete values called labels or colors. Designing efficient Markov chains is a challenging task when the variables are strongly coupled. Because of this, methods such as the single-site Gibbs sampler often experience suboptimal performance. A well-celebrated algorithm, the Swendsen–Wang (SW) method, can address the coupling problem. It clusters the vertices as connected components after turning off some edges probabilistically, and changes the color of one cluster as a whole. It is known to mix rapidly under certain conditions. Unfortunately, the SW method has limited ...", "This paper presents a framework of layered graph matching for integrating graph partition and matching. The objective is to find an unknown number of corresponding graph structures in two images. We extract discriminative local primitives from both images and construct a candidacy graph whose vertices are matching candidates (i.e., a pair of primitives) and whose edges are either negative for mutual exclusion or positive for mutual consistence. Then we pose layered graph matching as a multicoloring problem on the candidacy graph and solve it using a composite cluster sampling algorithm. This algorithm assigns some vertices into a number of colors, each being a matched layer, and turns off all the remaining candidates. 
The algorithm iterates two steps: 1) Sampling the positive and negative edges probabilistically to form a composite cluster, which consists of a few mutually conflicting connected components (CCPs) in different colors and 2) assigning new colors to these CCPs with consistence and exclusion relations maintained, and the assignments are accepted by the Markov Chain Monte Carlo (MCMC) mechanism to preserve detailed balance. This framework demonstrates state-of-the-art performance on several applications, such as multi-object matching with large motion, shape matching and retrieval, and object localization in cluttered background.", "In order to track moving objects in long range against occlusion, interruption, and background clutter, this paper proposes a unified approach for global trajectory analysis. Instead of the traditional frame-by-frame tracking, our method recovers target trajectories based on a short sequence of video frames, e.g., 15 frames. We initially calculate a foreground map at each frame obtained from a state-of-the-art background model. An attribute graph is then extracted from the foreground map, where the graph vertices are image primitives represented by the composite features. With this graph representation, we pose trajectory analysis as a joint task of spatial graph partitioning and temporal graph matching. The task can be formulated by maximizing a posteriori under the Bayesian framework, in which we integrate the spatio-temporal contexts and the appearance models. The probabilistic inference is achieved by a data-driven Markov chain Monte Carlo algorithm. Given a period of observed frames, the algorithm simulates an ergodic and aperiodic Markov chain, and it visits a sequence of solution states in the joint space of spatial graph partitioning and temporal graph matching. 
In the experiments, our method is tested on several challenging videos from the public datasets of visual surveillance, and it outperforms the state-of-the-art methods.", "Topic models from the text understanding literature have shown promising results in unsupervised image categorization and object localization. Categories are treated as topics, and words are formed by vector quantizing local descriptors of image patches. Limitations of topic models include their weakness in localizing objects, and the requirement of a fairly large proportion of words coming from the object. We present a new approach that employs correspondences between images to provide information about object configuration, which in turn enhances the reliability of object localization and categorization. This approach is efficient, as it requires only a small number of correspondences. We show improved categorization and localization performance on real and synthetic data. Moreover, we can push the limits of topic models when the proportion of words coming from the object is very low.", "The goal of this paper is to evaluate and compare models and methods for learning to recognize basic entities in images in an unsupervised setting. In other words, we want to discover the objects present in the images by analyzing unlabeled data and searching for re-occurring patterns. We experiment with various baseline methods, methods based on latent variable models, as well as spectral clustering methods. The results are presented and compared both on subsets of Caltech256 and MSRC2, data sets that are larger and more challenging and that include more object classes than what has previously been reported in the literature. A rigorous framework for evaluating unsupervised object discovery methods is proposed." ] }
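The flexibility described in the paragraph above — opening, merging, and removing clusters without fixing their number in advance — is the defining property of nonparametric samplers. As a minimal, self-contained illustration (a plain Chinese Restaurant Process draw, not the cited Swendsen–Wang or cluster-sampling algorithms; all values are made up), the number of clusters can be left to emerge stochastically:

```python
import random

def crp_partition(n, alpha=1.0, seed=0):
    """Chinese Restaurant Process draw: assigns n items to clusters
    without fixing the number of clusters in advance. Item i joins an
    existing cluster k with probability counts[k] / (i + alpha), or
    opens a new cluster with probability alpha / (i + alpha)."""
    rng = random.Random(seed)
    counts = []   # counts[k] = current size of cluster k
    labels = []   # cluster label assigned to each item
    for i in range(n):
        weights = counts + [alpha]  # last slot = open a new cluster
        r = rng.random() * (i + alpha)
        k = 0
        while r >= weights[k]:
            r -= weights[k]
            k += 1
        if k == len(counts):        # a new cluster was opened
            counts.append(0)
        counts[k] += 1
        labels.append(k)
    return labels

labels = crp_partition(10)
```

Larger `alpha` tends to open more clusters; the sampler never needs the cluster count as an input, which is the property the cited MCMC methods exploit.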
1501.07716
2952678541
Classic resource recommenders like Collaborative Filtering (CF) treat users as being just another entity, neglecting non-linear user-resource dynamics shaping attention and interpretation. In this paper, we propose a novel hybrid recommendation strategy that refines CF by capturing these dynamics. The evaluation results reveal that our approach substantially improves CF and, depending on the dataset, successfully competes with a computationally much more expensive Matrix Factorization variant.
Collaborative Filtering Extensions: One of our previous studies in this field @cite_35 introduces an approach that extends CF in social tagging systems by incorporating tag and time information. It combines user-based and item-based CF with tag frequency and recency information by applying the base-level learning (BLL) equation from human memory theory.
{ "cite_N": [ "@cite_35" ], "mid": [ "2949474575" ], "abstract": [ "In this work we present a novel item recommendation approach that aims at improving Collaborative Filtering (CF) in social tagging systems using the information about tags and time. Our algorithm follows a two-step approach, where in the first step a potentially interesting candidate item-set is found using user-based CF and in the second step this candidate item-set is ranked using item-based CF. Within this ranking step we integrate the information of tag usage and time using the Base-Level Learning (BLL) equation coming from human memory theory that is used to determine the reuse-probability of words and tags using a power-law forgetting function. As the results of our extensive evaluation conducted on data-sets gathered from three social tagging systems (BibSonomy, CiteULike and MovieLens) show, the usage of tag-based and time information via the BLL equation also helps to improve the ranking and recommendation process of items and thus, can be used to realize an effective item recommender that outperforms two alternative algorithms which also exploit time and tag-based information." ] }
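The BLL equation named in the paragraph above is the standard base-level activation formula from the ACT-R theory of human memory: B = ln(Σ_j t_j^(-d)), where t_j is the time elapsed since the j-th use and d is a power-law decay rate (conventionally 0.5). As a rough sketch (the timestamps below are made up, not taken from the cited study):

```python
import math

def bll_activation(use_times, now, d=0.5):
    """Base-Level Learning (BLL) activation from ACT-R:
    B = ln(sum_j (now - t_j)^(-d)).
    d is the power-law decay rate; d = 0.5 is the usual default."""
    return math.log(sum((now - t) ** (-d) for t in use_times))

# A tag used often and recently gets higher activation than one
# used equally often but long ago (times are illustrative, in days).
recent = bll_activation([9.0, 9.5, 9.9], now=10.0)
old = bll_activation([1.0, 2.0, 3.0], now=10.0)
assert recent > old
```

Weighting tags by this activation is how the cited approach folds frequency and recency into the CF ranking step.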
1501.07716
2952678541
Classic resource recommenders like Collaborative Filtering (CF) treat users as being just another entity, neglecting non-linear user-resource dynamics shaping attention and interpretation. In this paper, we propose a novel hybrid recommendation strategy that refines CF by capturing these dynamics. The evaluation results reveal that our approach substantially improves CF and, depending on the dataset, successfully competes with a computationally much more expensive Matrix Factorization variant.
The work of @cite_22 distinguishes between recommender systems that provide non-personalized and personalized recommendations. Whereas non-personalized recommender systems are not based on user models, personalized ones choose resources by considering the user profile (e.g., previous user interactions or user preferences). A considerable number of techniques have been proposed for designing the user model for resource recommendation @cite_32 @cite_5 . Among them, some approaches aim to provide dynamically adapted, personalized recommendations to users @cite_31 .
{ "cite_N": [ "@cite_5", "@cite_31", "@cite_32", "@cite_22" ], "mid": [ "2028934448", "2058806077", "", "2018371603" ], "abstract": [ "This paper presents a successful attempt at evolving web intelligence in the tourism scenario, namely throughout two main areas: User Modeling and Recommender Systems. The first subject deals with the correct modeling of tourists’ profiles using a wide variety of techniques, such as stereotypes, keywords and psychological models. These techniques, besides presenting user interests with great coherence and completeness, allow for the reduction of several current problems such as the cold start issue, gray sheep individuals and overspecialization. The recommender system, by making use of all user models’ building blocks, brings an interesting, innovative and hybrid nature to the area, with benefits such as behavioral filtering, multi-technique resourcefulness and on-the-fly suggestions. The architecture was already tested in the scope of a prototype regarding the city of Porto, in Portugal.", "The problem of information overload has been a relevant and active research topic for the past twenty years. Since then, numerous algorithms and recommendation approaches have been proposed, which gives rise to a new type of problem: recommendation algorithm overload. Although hybrid recommendation techniques, which combine the strengths of individual recommenders, have become well-accepted, the procedure of building and tuning a hybrid recommender is still a tedious and time-consuming process. In our work, we focus on dynamically building personalized hybrid recommender systems on an individual user basis. By means of a dynamic online learning strategy we combine the most appropriate recommendation algorithms for a user based on realtime relevance feedback. 
Learning effectiveness of genetic algorithms, machine learning techniques and other optimization approaches will be studied in both an offline and online setting.", "", "Recommender Systems (RSs) help users search large amounts of digital contents and services by allowing them to identify the items that are likely to be more attractive or useful. RSs play an important persuasion role, as they can potentially augment the users’ trust towards in an application and orient their decisions or actions towards specific directions. This article explores the persuasiveness of RSs, presenting two vast empirical studies that address a number of research questions. First, we investigate if a design property of RSs, defined by the statistically measured quality of algorithms, is a reliable predictor of their potential for persuasion. This factor is measured in terms of perceived quality, defined by the overall satisfaction, as well as by how users judge the accuracy and novelty of recommendations. For our purposes, we designed an empirical study involving 210 subjects and implemented seven full-sized versions of a commercial RS, each one using the same interface and dataset (a subset of Netflix), but each with a different recommender algorithm. In each experimental configuration we computed the statistical quality (recall and F-measures) and collected data regarding the quality perceived by 30 users. The results show us that algorithmic attributes are less crucial than we might expect in determining the user’s perception of an RS’s quality, and suggest that the user’s judgment and attitude towards a recommender are likely to be more affected by factors related to the user experience. Second, we explore the persuasiveness of RSs in the context of large interactive TV services. We report a study aimed at assessing whether measurable persuasion effects (e.g., changes of shopping behavior) can be achieved through the introduction of a recommender. 
Our data, collected for more than one year, allow us to conclude that, (1) the adoption of an RS can affect both the lift factor and the conversion rate, determining an increased volume of sales and influencing the user’s decision to actually buy one of the recommended products, (2) the introduction of an RS tends to diversify purchases and orient users towards less obvious choices (the long tail), and (3) the perceived novelty of recommendations is likely to be more influential than their perceived accuracy. Overall, the results of these studies improve our understanding of the persuasion phenomena induced by RSs, and have implications that can be of interest to academic scholars, designers, and adopters of this class of systems." ] }
1501.07716
2952678541
Classic resource recommenders like Collaborative Filtering (CF) treat users as being just another entity, neglecting non-linear user-resource dynamics shaping attention and interpretation. In this paper, we propose a novel hybrid recommendation strategy that refines CF by capturing these dynamics. The evaluation results reveal that our approach substantially improves CF and, depending on the dataset, successfully competes with a computationally much more expensive Matrix Factorization variant.
A specific research topic, which is increasingly gaining popularity, is human decision making in recommender systems @cite_12 . The work presented in @cite_7 systematically analyzes recommender systems as decision support systems based on the nature of users' goals and the dynamic characteristics of the resource space (e.g., availability of resources). However, there is still a lack of research investigating user decision processes at a detailed level and integrating established findings from psychology. Thus, we consider that our proposed approach contributes to this line of research.
{ "cite_N": [ "@cite_7", "@cite_12" ], "mid": [ "2405824599", "2002317872" ], "abstract": [ "factors that influence users' decision making processes in Recommender Systems (RSs) have been investigated by a relatively vast research of empirical and theoretical nature, mostly in the field of e - commerce . In this paper, we discuss some aspects of the user experience with RSs that may affect the decision making process and outcome, and have been marginally addressed by prior research. These include the nature of users' goals and the dynamic characteristics of the resources space (e.g., availability during the search process) . We arg ue that these subjective and objective factors of the user experience with a RS call for a rethinking of the decision making process as it is normally assumed in traditional RSs, and raise a number or research challenges. These concepts are exemplified in the application domain of on - line services , specifically, hotel booking - a field where w e are carrying on a number of activities in cooperation with a large stakeholder ( Venere.com - a company of Expedia Inc.). Still, most of the arguments discussed in the paper can be extended to other domains, and have general implications for RS design and evaluation.", "Recommender systems have already proved to be valuable for coping with the information overload problem in several application domains. They provide people with suggestions for items which are likely to be of interest for them; hence, a primary function of recommender systems is to help people make good choices and decisions. However, most previous research has focused on recommendation techniques and algorithms, and less attention has been devoted to the decision making processes adopted by the users and possibly supported by the system. 
There is still a gap between the importance that the community gives to the assessment of recommendation algorithms and the current range of ongoing research activities concerning human decision making. Different decision-psychological phenomena can influence the decision making of users of recommender systems, and research along these lines is becoming increasingly important and popular. This special issue highlights how the coupling of recommendation algorithms with the understanding of human choice and decision making theory has the potential to benefit research and practice on recommender systems and to enable users to achieve a good balance between decision accuracy and decision effort." ] }
1501.07716
2952678541
Classic resource recommenders like Collaborative Filtering (CF) treat users as being just another entity, neglecting non-linear user-resource dynamics shaping attention and interpretation. In this paper, we propose a novel hybrid recommendation strategy that refines CF by capturing these dynamics. The evaluation results reveal that our approach substantially improves CF and, depending on the dataset, successfully competes with a computationally much more expensive Matrix Factorization variant.
Another concept within the area of recommender systems that needs more extensive consideration in the future is long tail recommendation. Basically, the long tail refers to resources with low popularity @cite_15 . It is of great interest to show how recommending these long tail resources can impact user satisfaction. Furthermore, it is important to investigate whether recommender systems can generate additional revenue from long tail resources @cite_15 @cite_11 @cite_33 .
{ "cite_N": [ "@cite_15", "@cite_33", "@cite_11" ], "mid": [ "", "2079728196", "2140942692" ], "abstract": [ "", "Improving recommendation accuracy is the mostly focused target of recommendation systems, while it has been increasingly recognized that accuracy is not enough as the only quality criterion. More concepts have been proposed recently to augment the evaluation dimensions, such as similarity, diversity, long-tail, etc. Simultaneously considering multiple criteria leads to a multi-task recommendation. In this paper, a graph-based recommendation approach is proposed to effectively and flexibly trade-off among them. Our approach is considered based a 1st order Markovian graph with transition probabilities between user-item pairs. A \"cost flow\" concept is proposed over the graph, so that items with lower costs are stronger recommended to a user. The cost flows are formulated in a recursive dynamic form, whose stability is proved to be guaranteed by appropriately lower-bounding the transition costs. Furthermore, a mixture of transition costs is designed by combining three ingredients related to long-tail, focusing degree and similarity. To evaluate the ingredients, we propose an orthogonal-sparse-orthogonal nonnegative matrix tri-factorization model and an efficient multiplicative algorithm. Empirical experiments on real-world data show promising results of our approach, which could be regarded as a general framework for other affects if transition costs are designed in various ways.", "The success of \"infinite-inventory\" retailers such as Amazon.com and Netflix has been largely attributed to a \"long tail\" phenomenon. Although the majority of their inventory is not in high demand, these niche products, unavailable at limited-inventory competitors, generate a significant fraction of total revenue in aggregate. 
In addition, tail product availability can boost head sales by offering consumers the convenience of \"one-stop shopping\" for both their mainstream and niche tastes. However, most of existing recommender systems, especially collaborative filter based methods, can not recommend tail products due to the data sparsity issue. It has been widely acknowledged that to recommend popular products is easier yet more trivial while to recommend long tail products adds more novelty yet it is also a more challenging task. In this paper, we propose a novel suite of graph-based algorithms for the long tail recommendation. We first represent user-item information with undirected edge-weighted graph and investigate the theoretical foundation of applying Hitting Time algorithm for long tail item recommendation. To improve recommendation diversity and accuracy, we extend Hitting Time and propose efficient Absorbing Time algorithm to help users find their favorite long tail items. Finally, we refine the Absorbing Time algorithm and propose two entropy-biased Absorbing Cost algorithms to distinguish the variation on different user-item rating pairs, which further enhances the effectiveness of long tail recommendation. Empirical experiments on two real life datasets show that our proposed algorithms are effective to recommend long tail items and outperform state-of-the-art recommendation techniques." ] }
1501.07250
2951806338
This paper proposes FMAP (Forward Multi-Agent Planning), a fully-distributed multi-agent planning method that integrates planning and coordination. Although FMAP is specifically aimed at solving problems that require cooperation among agents, the flexibility of the domain-independent planning model allows FMAP to tackle multi-agent planning tasks of any type. In FMAP, agents jointly explore the plan space by building up refinement plans through a complete and flexible forward-chaining partial-order planner. The search is guided by @math , a novel heuristic function that is based on the concepts of Domain Transition Graph and frontier state and is optimized to evaluate plans in distributed environments. Agents in FMAP apply an advanced privacy model that allows them to adequately keep private information while communicating only the data of the refinement plans that is relevant to each of the participating agents. Experimental results show that FMAP is a general-purpose approach that efficiently solves tightly-coupled domains that have specialized agents and cooperative goals as well as loosely-coupled problems. Specifically, the empirical evaluation shows that FMAP outperforms current MAP systems at solving complex planning tasks that are adapted from the International Planning Competition benchmarks.
In the literature, there are two main approaches for solving MAP tasks like the one described in Example . The centralized approach uses an intermediary agent that has complete knowledge of the task. The distributed or decentralized approach spreads the planning responsibility among agents, which are in charge of interacting with each other to coordinate their local solutions, if necessary @cite_3 @cite_12 . The adoption of a centralized approach is aimed at improving planner performance by exploiting the inherent structure of MAP tasks @cite_8 @cite_15 . However, centralized approaches assume a single planning entity with complete knowledge of the task, which is rather unrealistic if the parties involved in the task have sensitive private information that they are not willing to disclose @cite_0 . In Example , the three agents involved in the task want to protect the information regarding their internal processes and business strategies, so a centralized setting is not an acceptable solution.
{ "cite_N": [ "@cite_8", "@cite_3", "@cite_0", "@cite_15", "@cite_12" ], "mid": [ "1541379879", "2078234787", "1982485071", "2099752828", "2042213761" ], "abstract": [ "Partially ordered plan structures are highly suitable for centralized multi-agent planning, where plans should be minimally constrained in terms of precedence between actions performed by different agents. In many cases, however, any given agent will perform its own actions in strict sequence. We take advantage of this fact to develop a hybrid of temporal partial order planning and forward-chaining planning. A sequence of actions is constructed for each agent and linked to other agents' actions by a partially ordered precedence relation as required. When agents are not too tightly coupled, this structure enables the generation of partial but strong information about the state at the end of each agent's action sequence. Such state information can be effectively exploited during search. A prototype planner within this framework has been implemented, using precondition control formulas to guide the search process.", "A common assumption made in multi-robot research is the connectedness of the underlying network. Although this seems a valid assumption for static networks, it is not realistic for mobile robotic networks, where communication between robots usually is distance dependent. Motivated by this fact, we explicitly consider the communication limitations. This paper extends the LFIP based exploration framework previously developed by (Cogn. Comput. doi: 10.1007 s12559-012-9142-7 , 2012), to address the Multi-Agent Territory Exploration (MATE-n k ) task under severe communication constraints. In MATE-n k task agents have to explore their environment to find and visit n checkpoints, which only count as \"visited\" when k agents are present at the same time. 
In its simplest form, the architecture consists of two layers: an \"Exploration layer\" consisting of a selection of future locations for the team for further exploring the environment, and \"Exploration and CheckpointVisit layer\", consisting of visiting the detected checkpoints while continuing the exploration task. The connectivity maintenance objective is achieved via two ways: (1) The first layer employs a leader-follower concept, where a communication zone is constructed by the leader using a distance transforms method, and (2) In the second layer we make use of a graph theory for characterizing the communication, which employs the adjacency and Laplacian matrices of the graph and their spectral properties. The proposed approach has been implemented and evaluated in several simulated environments and with varying team sizes and communication ranges. Throughout the paper, our conclusions are corroborated by the results from extensive simulations.", "Distributed or multi-agent planning extends classical AI planning to domains where several agents can plan and act together. There exist many recent developments in this discipline that range over different approaches for distributed planning algorithms, distributed plan execution processes or communication protocols among agents. One of the key issues about distributed planning is that it is the most appropriate way to tackle certain kind of planning problems, specially those where a centralized solving is unfeasible. In this paper we present a new planning framework aimed at solving planning problems in inherently distributed domains where agents have a collection of private data which cannot share with other agents. However, collaboration is required since agents are unable to accomplish its own tasks alone or, at least, can accomplish its tasks better when working with others. 
Our proposal motivates a new planning scheme based on a distributed heuristic search and a constraint programming resolution process.", "Many real-world planning domains, including those used in common benchmark problems, are based on multiagent scenarios. It has long been recognised that breaking down such problems into sub-problems for individual agents may help reduce overall planning complexity. This kind of approach is especially effective in domains where interaction between agents is limited. In this paper we present a fully centralised, offline, sequential, total-order planning algorithm for solving classical planning problems based on this idea. This algorithm consists of an automated decomposition process and a heuristic search method designed specifically for decomposed domains. The decomposition method is part of a preprocessing step and can be used to determine the \"multiagent nature\" of a planning problem prior to actual plan search. The heuristic search strategy is shown to effectively exploit any decompositions that are found and performs significantly better than current approaches on loosely coupled domains.", "Unorganized traffic is a generalized form of travel wherein vehicles do not adhere to any predefined lanes and can travel in-between lanes. Such travel is visible in a number of countries e.g. India, wherein it enables a higher traffic bandwidth, more overtaking and more efficient travel. These advantages are visible when the vehicles vary considerably in size and speed, in the absence of which the predefined lanes are near-optimal. Motion planning for multiple autonomous vehicles in unorganized traffic deals with deciding on the manner in which every vehicle travels, ensuring no collision either with each other or with static obstacles. In this paper the notion of predefined lanes is generalized to model unorganized travel for the purpose of planning vehicles travel. 
A uniform cost search is used for finding the optimal motion strategy of a vehicle, amidst the known travel plans of the other vehicles. The aim is to maximize the separation between the vehicles and static obstacles. The search is responsible for defining an optimal lane distribution among vehicles in the planning scenario. Clothoid curves are used for maintaining a lane or changing lanes. Experiments are performed by simulation over a set of challenging scenarios with a complex grid of obstacles. Additionally behaviours of overtaking, waiting for a vehicle to cross and following another vehicle are exhibited." ] }
1501.07250
2951806338
This paper proposes FMAP (Forward Multi-Agent Planning), a fully-distributed multi-agent planning method that integrates planning and coordination. Although FMAP is specifically aimed at solving problems that require cooperation among agents, the flexibility of the domain-independent planning model allows FMAP to tackle multi-agent planning tasks of any type. In FMAP, agents jointly explore the plan space by building up refinement plans through a complete and flexible forward-chaining partial-order planner. The search is guided by @math , a novel heuristic function that is based on the concepts of Domain Transition Graph and frontier state and is optimized to evaluate plans in distributed environments. Agents in FMAP apply an advanced privacy model that allows them to adequately keep private information while communicating only the data of the refinement plans that is relevant to each of the participating agents. Experimental results show that FMAP is a general-purpose approach that efficiently solves tightly-coupled domains that have specialized agents and cooperative goals as well as loosely-coupled problems. Specifically, the empirical evaluation shows that FMAP outperforms current MAP systems at solving complex planning tasks that are adapted from the International Planning Competition benchmarks.
We then focus on fully distributed MAP, that is, the problem of coordinating agents in a shared environment where information is distributed. The distributed MAP setting involves two main tasks: the generation of local solutions and the coordination of the agents' plans into a global solution. Coordination can be performed at one or various stages of the distributed resolution of a MAP task. Plan-merging techniques are used for problems in which agents build local plans for the individual goals they have been assigned. Plan merging is about coordinating the local plans of agents so as to mutually benefit by avoiding duplication of effort. In this case, the goal is not to build a joint plan among entities that are functionally or spatially distributed, but rather to apply plan-merging techniques to coordinate the local plans of multiple agents that are each capable of achieving the problem goals by themselves @cite_24 .
{ "cite_N": [ "@cite_24" ], "mid": [ "1697381716" ], "abstract": [ "Coordination can be required whenever multiple agents plan to achieve their individual goals independently, but might mutually benefit by coordinating their plans to avoid working at cross purposes or duplicating effort. Although variations of such problems have been studied in the literature, there is as yet no agreement over a general characterization of them. In this paper, we formally define a common coordination problem subclass, which we call the Multiagent Plan Coordination Problem, that is rich enough to represent a wide variety of multiagent coordination problems. We then describe a general framework that extends the partial-order, causal-link plan representation to the multiagent case, and that treats coordination as a form of iterative repair of plan flaws between agents. We show that this algorithmic formulation can scale to the multiagent case better than can a straightforward application of the existing plan coordination techniques, highlighting fundamental differences between our algorithmic framework and these earlier approaches. We then examine whether and how the Multiagent Plan Coordination Problem can be cast as a Distributed Constraint Optimization Problem (DCOP). We do so using ADOPT, a state-of-the-art system that can solve DCOPs in an asynchronous, parallel manner using local communication between individual computational agents. We conclude with a discussion of possible extensions of our work." ] }
1501.07250
2951806338
This paper proposes FMAP (Forward Multi-Agent Planning), a fully-distributed multi-agent planning method that integrates planning and coordination. Although FMAP is specifically aimed at solving problems that require cooperation among agents, the flexibility of the domain-independent planning model allows FMAP to tackle multi-agent planning tasks of any type. In FMAP, agents jointly explore the plan space by building up refinement plans through a complete and flexible forward-chaining partial-order planner. The search is guided by @math , a novel heuristic function that is based on the concepts of Domain Transition Graph and frontier state and is optimized to evaluate plans in distributed environments. Agents in FMAP apply an advanced privacy model that allows them to adequately keep private information while communicating only the data of the refinement plans that is relevant to each of the participating agents. Experimental results show that FMAP is a general-purpose approach that efficiently solves tightly-coupled domains that have specialized agents and cooperative goals as well as loosely-coupled problems. Specifically, the empirical evaluation shows that FMAP outperforms current MAP systems at solving complex planning tasks that are adapted from the International Planning Competition benchmarks.
There is a large body of work on plan-merging techniques. The work in @cite_24 introduces a distributed coordination framework based on partial-order planning that addresses the interactions that emerge between the agents' local plans. This framework, however, does not consider privacy. The proposal in @cite_22 is based on the iterative revision of the agents' local plans. Agents in this model cooperate by mutually adapting their local plans, with a focus on improving their common or individual benefit. This approach also ignores privacy and agents are assumed to be fully cooperative. The approach in @cite_27 uses multi-agent plan repair to solve inconsistencies among the agents' local plans while maintaining privacy. @math -SATPLAN @cite_18 extends a satisfiability-based planner to coordinate the agents' local plans by studying positive and negative interactions among them.
{ "cite_N": [ "@cite_24", "@cite_27", "@cite_18", "@cite_22" ], "mid": [ "1697381716", "1506509683", "", "2167131933" ], "abstract": [ "Coordination can be required whenever multiple agents plan to achieve their individual goals independently, but might mutually benefit by coordinating their plans to avoid working at cross purposes or duplicating effort. Although variations of such problems have been studied in the literature, there is as yet no agreement over a general characterization of them. In this paper, we formally define a common coordination problem subclass, which we call the Multiagent Plan Coordination Problem, that is rich enough to represent a wide variety of multiagent coordination problems. We then describe a general framework that extends the partial-order, causal-link plan representation to the multiagent case, and that treats coordination as a form of iterative repair of plan flaws between agents. We show that this algorithmic formulation can scale to the multiagent case better than can a straightforward application of the existing plan coordination techniques, highlighting fundamental differences between our algorithmic framework and these earlier approaches. We then examine whether and how the Multiagent Plan Coordination Problem can be cast as a Distributed Constraint Optimization Problem (DCOP). We do so using ADOPT, a state-of-the-art system that can solve DCOPs in an asynchronous, parallel manner using local communication between individual computational agents. We conclude with a discussion of possible extensions of our work.", "In dynamic environments, agents have to deal with changing situations. In these cases, repairing a plan is often more efficient than planning from scratch, but existing planning techniques are more advanced than existing plan repair techniques. Therefore, we propose a straightforward method to extend planning techniques such that they are able to repair plans. 
This is possible, because plan repair consists of two different operations: (i) removing obstructing constraints (such as actions) from the plan, and (ii) adding actions to achieve the goals. Adding actions is similar to planning, but as we demonstrate, planning heuristics can also be used for removing constraints, which we call unrefinement. We present a plan repair template that reflects these two operations, and we present a heuristic for unrefinement that can make use of an arbitrary existing planning technique. We apply this method to an existing planning system (VHPOP) resulting in POPR, a plan repair system that performs much better than replanning from scratch, and also significantly better than another recent plan repair method (GPG). Furthermore, we show that the plan repair template is a generalisation of existing plan repair methods.", "", "Abstract In order to model plan coordination behavior of agents we develop a simple framework for representing plans, resources and goals of agents. Plans are represented as directed acyclic graphs of skills and resources that, given adequate initial resources, can realize special resources, called goals. Given the storage costs of resources, application costs of skills, and values of goals, it is possible to reason about the profits of a plan for an agent. We then model two forms of plan coordination behavior between two agents, viz. fusion , aiming at the maximization of the total yield of the agents involved, and collaboration , which aims at the maximization of the individual yield of each agent. We argue how both forms of cooperation can be seen as iterative plan revision processes. We also present efficient polynomial algorithms for agent plan fusion and collaboration that are based on this idea of iterative plan revision. Both the framework and the fusion algorithm will be illustrated by an example from the field of transportation, where agents are transportation companies." ] }
1501.07250
2951806338
This paper proposes FMAP (Forward Multi-Agent Planning), a fully-distributed multi-agent planning method that integrates planning and coordination. Although FMAP is specifically aimed at solving problems that require cooperation among agents, the flexibility of the domain-independent planning model allows FMAP to tackle multi-agent planning tasks of any type. In FMAP, agents jointly explore the plan space by building up refinement plans through a complete and flexible forward-chaining partial-order planner. The search is guided by @math , a novel heuristic function that is based on the concepts of Domain Transition Graph and frontier state and is optimized to evaluate plans in distributed environments. Agents in FMAP apply an advanced privacy model that allows them to adequately keep private information while communicating only the data of the refinement plans that is relevant to each of the participating agents. Experimental results show that FMAP is a general-purpose approach that efficiently solves tightly-coupled domains that have specialized agents and cooperative goals as well as loosely-coupled problems. Specifically, the empirical evaluation shows that FMAP outperforms current MAP systems at solving complex planning tasks that are adapted from the International Planning Competition benchmarks.
Plan-merging techniques are not very well suited for coping with tightly-coupled tasks as they may introduce exponentially many ordering constraints in problems that require great coordination effort @cite_24 . In general, plan merging is not an effective method for attaining cooperative goals since this resolution scheme generally assumes that each agent is able to solve a subset of the task's goals by itself. However, some approaches use plan merging to coordinate local plans of specialized agents. In this case, the effort is placed on discovering the interaction points among agents through the public information that they share. For instance, Planning First @cite_5 introduces a cooperative MAP approach for loosely-coupled tasks, in which specialized agents carry out planning individually through a state-based planner. The resulting local plans are then coordinated by solving a distributed Constraint Satisfaction Problem (CSP) @cite_29 . This combination of CSP and planning to solve MAP tasks was originally introduced by the MA-STRIPS framework @cite_35 .
{ "cite_N": [ "@cite_24", "@cite_5", "@cite_29", "@cite_35" ], "mid": [ "1697381716", "1616569656", "", "2124254247" ], "abstract": [ "Coordination can be required whenever multiple agents plan to achieve their individual goals independently, but might mutually benefit by coordinating their plans to avoid working at cross purposes or duplicating effort. Although variations of such problems have been studied in the literature, there is as yet no agreement over a general characterization of them. In this paper, we formally define a common coordination problem subclass, which we call the Multiagent Plan Coordination Problem, that is rich enough to represent a wide variety of multiagent coordination problems. We then describe a general framework that extends the partial-order, causal-link plan representation to the multiagent case, and that treats coordination as a form of iterative repair of plan flaws between agents. We show that this algorithmic formulation can scale to the multiagent case better than can a straightforward application of the existing plan coordination techniques, highlighting fundamental differences between our algorithmic framework and these earlier approaches. We then examine whether and how the Multiagent Plan Coordination Problem can be cast as a Distributed Constraint Optimization Problem (DCOP). We do so using ADOPT, a state-of-the-art system that can solve DCOPs in an asynchronous, parallel manner using local communication between individual computational agents. We conclude with a discussion of possible extensions of our work.", "We present a fully distributed multi-agent planning algorithm. Our methodology uses distributed constraint satisfaction to coordinate between agents, and local planning to ensure the consistency of these coordination points. 
To solve the distributed CSP efficiently, we must modify existing methods to take advantage of the structure of the underlying planning problem. In multi-agent planning domains with limited agent interaction, our algorithm empirically shows scalability beyond state of the art centralized solvers. Our work also provides a novel, real-world setting for testing and evaluating distributed constraint satisfaction algorithms in structured domains and illustrates how existing techniques can be altered to address such structure.", "", "Loosely coupled multi-agent systems are perceived as easier to plan for because they require less coordination between agent sub-plans. In this paper we set out to formalize this intuition. We establish an upper bound on the complexity of multi-agent planning problems that depends exponentially on two parameters quantifying the level of agents' coupling, and on these parameters only. The first parameter is problem-independent, and it measures the inherent level of coupling within the system. The second is problem-specific and it has to do with the minmax number of action-commitments per agent required to solve the problem. Most importantly, the direct dependence on the number of agents, on the overall size of the problem, and on the length of the agents' plans, is only polynomial. This result is obtained using a new algorithmic methodology which we call \"planning as CSP+planning\". We believe this to be one of the first formal results to both quantify the notion of agents' coupling, and to demonstrate a multi-agent planning algorithm that, for fixed coupling levels, scales polynomially with the size of the problem." ] }
1501.07250
2951806338
This paper proposes FMAP (Forward Multi-Agent Planning), a fully-distributed multi-agent planning method that integrates planning and coordination. Although FMAP is specifically aimed at solving problems that require cooperation among agents, the flexibility of the domain-independent planning model allows FMAP to tackle multi-agent planning tasks of any type. In FMAP, agents jointly explore the plan space by building up refinement plans through a complete and flexible forward-chaining partial-order planner. The search is guided by @math , a novel heuristic function that is based on the concepts of Domain Transition Graph and frontier state and is optimized to evaluate plans in distributed environments. Agents in FMAP apply an advanced privacy model that allows them to adequately keep private information while communicating only the data of the refinement plans that is relevant to each of the participating agents. Experimental results show that FMAP is a general-purpose approach that efficiently solves tightly-coupled domains that have specialized agents and cooperative goals as well as loosely-coupled problems. Specifically, the empirical evaluation shows that FMAP outperforms current MAP systems at solving complex planning tasks that are adapted from the International Planning Competition benchmarks.
Finally, MAPR is a recent planner that performs goal allocation to each agent @cite_20 . Agents iteratively solve the assigned goals by extending the plan of the previous agent. In this approach, agents work under limited knowledge of the environment by obfuscating the private information in their plans. MAPR is particularly effective for loosely-coupled problems, but it cannot deal with tasks that feature specialized agents and cooperative goals, since it assumes that each goal is achieved by a single agent. A later section will show a comparative performance evaluation between MAPR and FMAP, our proposed approach.
{ "cite_N": [ "@cite_20" ], "mid": [ "2394593764" ], "abstract": [ "Generating plans for a single agent has been shown to be a difficult task. If we generalize to a multi-agent setting, the problem becomes exponentially harder in general. The centralized approach where a plan is jointly generated for all agents is only possible in some applications when agents do not have private goals, actions or states. We describe in this paper an alternative approach, MAPR (Multi-Agent Planning by plan Reuse), that considers both the agents private and public information. We have been inspired by iterative Multi-Agent Planning (MAP) techniques as the one presented in [1]. MAPR first assigns a subset of public goals to each agent, while each agent might have a set of private goals also. Then, MAPR calls the first agent to provide a solution (plan) that takes into account its private and public goals. MAPR iteratively calls each agent with the solutions provided by previous agents. Each agent receives its own goals plus the goals of the previous agents. Thus, each agent solves its problem, but taking into account the previous agents solutions. Since previous solutions might consider private data, all private information from an agent is obfuscated for the next ones. Since each agent receives the plan from the previous agent that implicitly considers the solutions to all previous agents, instead of starting the search from scratch, it can also reuse the previous whole plan or only a subset of the actions. Experiments show that MAPR outperforms in several orders of magnitude state-of-the-art techniques in the tested domains." ] }
1501.07467
2116923040
User engagement refers to the amount of interaction an instance (e.g., tweet, news, and forum post) achieves. Ranking the items in social media websites based on the amount of user participation in them, can be used in different applications, such as recommender systems. In this paper, we consider a tweet containing a rating for a movie as an instance and focus on ranking the instances of each user based on their engagement, i.e., the total number of retweets and favorites it will gain. For this task, we define several features which can be extracted from the meta-data of each tweet. The features are partitioned into three categories: user-based, movie-based, and tweet-based. We show that in order to obtain good results, features from all categories should be considered. We exploit regression and learning to rank methods to rank the tweets and propose to aggregate the results of regression and learning to rank methods to achieve better performance. We have run our experiments on an extended version of MovieTweeting dataset provided by ACM RecSys Challenge 2014. The results show that learning to rank approach outperforms most of the regression models and the combination can improve the performance significantly.
To address the problem of engagement prediction, several features have been proposed for training a model. @cite_15 have analyzed the factors that affect the number of retweets. They have concluded that hashtags, the number of followers, the number of followees, and the account age play important roles in increasing the probability that a tweet will be retweeted. @cite_2 have trained a probabilistic collaborative filtering model to predict future retweets from the history of previous ones.
{ "cite_N": [ "@cite_15", "@cite_2" ], "mid": [ "2026318959", "170896097" ], "abstract": [ "Retweeting is the key mechanism for information diffusion in Twitter. It emerged as a simple yet powerful way of disseminating information in the Twitter social network. Even though a lot of information is shared in Twitter, little is known yet about how and why certain information spreads more widely than others. In this paper, we examine a number of features that might affect retweetability of tweets. We gathered content and contextual features from 74M tweets and used this data set to identify factors that are significantly associated with retweet rate. We also built a predictive retweet model. We found that, amongst content features, URLs and hashtags have strong relationships with retweetability. Amongst contextual features, the number of followers and followees as well as the age of the account seem to affect retweetability, while, interestingly, the number of past tweets does not predict retweetability of a user's tweet. We believe that this research would inform the design of sensemaking and analytics tools for social media streams.", "We present a new methodology for predicting the spread of information in a social network. We focus on the Twitter network, where information is in the form of 140 character messages called tweets, and information is spread by users forwarding tweets, a practice known as retweeting. Using data of who and what was retweeted, we train a probabilistic collaborative filter model to predict future retweets. We find that the most important features for prediction are the identity of the source of the tweet and retweeter. Our methodology is quite flexible and be used as a basis for other prediction models in social networks." ] }
1501.07467
2116923040
User engagement refers to the amount of interaction an instance (e.g., tweet, news, and forum post) achieves. Ranking the items in social media websites based on the amount of user participation in them, can be used in different applications, such as recommender systems. In this paper, we consider a tweet containing a rating for a movie as an instance and focus on ranking the instances of each user based on their engagement, i.e., the total number of retweets and favorites it will gain. For this task, we define several features which can be extracted from the meta-data of each tweet. The features are partitioned into three categories: user-based, movie-based, and tweet-based. We show that in order to obtain good results, features from all categories should be considered. We exploit regression and learning to rank methods to rank the tweets and propose to aggregate the results of regression and learning to rank methods to achieve better performance. We have run our experiments on an extended version of MovieTweeting dataset provided by ACM RecSys Challenge 2014. The results show that learning to rank approach outperforms most of the regression models and the combination can improve the performance significantly.
Linear models have been used in other studies to predict the long-term popularity of YouTube videos from popularity measurements taken at regular intervals after publication @cite_23 . @cite_17 have proposed a passive-aggressive algorithm to predict whether a tweet will be retweeted or not.
{ "cite_N": [ "@cite_23", "@cite_17" ], "mid": [ "2070366435", "2265862919" ], "abstract": [ "We present a method for accurately predicting the long time popularity of online content from early measurements of user's access. Using two content sharing portals, Youtube and Digg, we show that by modeling the accrual of views and votes on content offered by these services we can predict the long-term dynamics of individual submissions from initial data. In the case of Digg, measuring access to given stories during the first two hours allows us to forecast their popularity 30 days ahead with remarkable accuracy, while downloads of Youtube videos need to be followed for 10 days to attain the same performance. The differing time scales of the predictions are shown to be due to differences in how content is consumed on the two portals: Digg stories quickly become outdated, while Youtube videos are still found long after they are initially submitted to the portal. We show that predictions are more accurate for submissions for which attention decays quickly, whereas predictions for evergreen content will be prone to larger errors.", "Twitter is a very popular way for people to share information on a bewildering multitude of topics. Tweets are propagated using a variety of channels: by following users or lists, by searching or by retweeting. Of these vectors, retweeting is arguably the most effective, as it can potentially reach the most people, given its viral nature. A key task is predicting if a tweet will be retweeted, and solving this problem furthers our understanding of message propagation within large user communities. We carry out a human experiment on the task of deciding whether a tweet will be retweeted which shows that the task is possible, as human performance levels are much above chance. Using a machine learning approach based on the passive-aggressive algorithm, we are able to automatically predict retweets as well as humans. 
Analyzing the learned model, we find that performance is dominated by social features, but that tweet features add a substantial boost." ] }
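The early-measurement idea summarized in the related work above (predict long-term popularity from popularity observed shortly after publication, via a linear relation in log space) can be sketched as follows. The view counts and time windows here are hypothetical toy data, not taken from the cited YouTube or Digg datasets:

```python
import numpy as np

# Hypothetical data: view counts measured early (e.g. after a few
# hours) and long-term view counts (e.g. after 30 days) for six items.
early = np.array([120.0, 45.0, 300.0, 80.0, 15.0, 500.0])
later = np.array([2400.0, 700.0, 9100.0, 1900.0, 200.0, 15500.0])

# Log-linear model: log(later) ~ a * log(early) + b, fitted by
# ordinary least squares on the log-transformed counts.
X = np.column_stack([np.log(early), np.ones_like(early)])
a, b = np.linalg.lstsq(X, np.log(later), rcond=None)[0]

def predict_longterm(early_views: float) -> float:
    """Predict long-term popularity from one early measurement."""
    return float(np.exp(a * np.log(early_views) + b))
```

Fitting in log space keeps the model linear while capturing the multiplicative growth of view counts; the same fitted model can then rank unseen items by predicted long-term popularity.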
1501.07467
2116923040
User engagement refers to the amount of interaction an instance (e.g., tweet, news, and forum post) achieves. Ranking the items in social media websites based on the amount of user participation in them, can be used in different applications, such as recommender systems. In this paper, we consider a tweet containing a rating for a movie as an instance and focus on ranking the instances of each user based on their engagement, i.e., the total number of retweets and favorites it will gain. For this task, we define several features which can be extracted from the meta-data of each tweet. The features are partitioned into three categories: user-based, movie-based, and tweet-based. We show that in order to obtain good results, features from all categories should be considered. We exploit regression and learning to rank methods to rank the tweets and propose to aggregate the results of regression and learning to rank methods to achieve better performance. We have run our experiments on an extended version of MovieTweeting dataset provided by ACM RecSys Challenge 2014. The results show that learning to rank approach outperforms most of the regression models and the combination can improve the performance significantly.
Recognizing popular messages is a closely related problem, with applications such as breaking news detection and personalized tweet content recommendation. @cite_1 have formulated this task as a classification problem by exploiting content-based features, temporal information, message meta-data, and the users' social graph.
{ "cite_N": [ "@cite_1" ], "mid": [ "2127267264" ], "abstract": [ "Social network services have become a viable source of information for users. In Twitter, information deemed important by the community propagates through retweets. Studying the characteristics of such popular messages is important for a number of tasks, such as breaking news detection, personalized message recommendation, viral marketing and others. This paper investigates the problem of predicting the popularity of messages as measured by the number of future retweets and sheds some light on what kinds of factors influence information propagation in Twitter. We formulate the task into a classification problem and study two of its variants by investigating a wide spectrum of features based on the content of the messages, temporal information, metadata of messages and users, as well as structural properties of the users' social graph on a large scale dataset. We show that our method can successfully predict messages which will attract thousands of retweets with good performance." ] }
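The classification formulation described above can be illustrated with a minimal sketch: a logistic-regression classifier over a few hand-picked content and user features, trained by plain gradient descent. The feature set, toy data, and labels are hypothetical illustrations, not the features or model used in @cite_1:

```python
import numpy as np

# Toy feature vectors per message:
# [has_url, has_hashtag, log(#followers), account_age_years]
X = np.array([
    [1.0, 1.0, 9.2, 3.0],
    [0.0, 0.0, 4.1, 0.5],
    [1.0, 0.0, 7.8, 2.0],
    [0.0, 1.0, 5.0, 1.0],
    [1.0, 1.0, 8.5, 4.0],
    [0.0, 0.0, 3.2, 0.2],
])
# Hypothetical label: 1 if the message attracted many retweets.
y = np.array([1.0, 0.0, 1.0, 0.0, 1.0, 0.0])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Plain batch gradient descent on the logistic loss.
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(5000):
    p = sigmoid(X @ w + b)
    w -= 0.05 * (X.T @ (p - y)) / len(y)
    b -= 0.05 * float(np.mean(p - y))

def predict_popular(features) -> bool:
    """True if the message is predicted to attract many retweets."""
    return bool(sigmoid(np.asarray(features) @ w + b) >= 0.5)
```

In a realistic setting the feature vector would also carry temporal and social-graph signals, and a regularized off-the-shelf classifier would replace the hand-rolled training loop.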
1501.07467
2116923040
User engagement refers to the amount of interaction an instance (e.g., tweet, news, and forum post) achieves. Ranking the items in social media websites based on the amount of user participation in them, can be used in different applications, such as recommender systems. In this paper, we consider a tweet containing a rating for a movie as an instance and focus on ranking the instances of each user based on their engagement, i.e., the total number of retweets and favorites it will gain. For this task, we define several features which can be extracted from the meta-data of each tweet. The features are partitioned into three categories: user-based, movie-based, and tweet-based. We show that in order to obtain good results, features from all categories should be considered. We exploit regression and learning to rank methods to rank the tweets and propose to aggregate the results of regression and learning to rank methods to achieve better performance. We have run our experiments on an extended version of MovieTweeting dataset provided by ACM RecSys Challenge 2014. The results show that learning to rank approach outperforms most of the regression models and the combination can improve the performance significantly.
Predicting how much attention a news article will attract, or how many comments it will gain, is another engagement prediction problem. @cite_20 have analyzed a news dataset to address this problem. They have focused on ranking the articles by their future popularity and have proposed to use linear regression for this task.
{ "cite_N": [ "@cite_20" ], "mid": [ "2056832611" ], "abstract": [ "News articles are a captivating type of online content that capture a significant amount of Internet users' interest. They are particularly consumed by mobile users and extremely diffused through online social platforms. As a result, there is an increased interest in promptly identifying the articles that will receive a significant amount of user attention. This task falls under the broad scope of content popularity prediction and has direct implications in various contexts such as caching strategies or online advertisement policies. In this paper we address the problem of predicting the popularity of news articles based on user comments. We formulate the prediction task into a ranking problem where the goal is not to infer the precise attention that a content will receive but to accurately rank articles based on their predicted popularity. To this end, we analyze the ranking performance of three prediction models using a dataset of articles covering a four-year period and published by 20minutes.fr, an important French online news platform. Our results indicate that prediction methods improve the ranking performance and we observed that for our dataset a simple linear prediction method outperforms more dedicated prediction methods." ] }
1501.07467
2116923040
User engagement refers to the amount of interaction an instance (e.g., tweet, news, and forum post) achieves. Ranking the items in social media websites based on the amount of user participation in them, can be used in different applications, such as recommender systems. In this paper, we consider a tweet containing a rating for a movie as an instance and focus on ranking the instances of each user based on their engagement, i.e., the total number of retweets and favorites it will gain. For this task, we define several features which can be extracted from the meta-data of each tweet. The features are partitioned into three categories: user-based, movie-based, and tweet-based. We show that in order to obtain good results, features from all categories should be considered. We exploit regression and learning to rank methods to rank the tweets and propose to aggregate the results of regression and learning to rank methods to achieve better performance. We have run our experiments on an extended version of MovieTweeting dataset provided by ACM RecSys Challenge 2014. The results show that learning to rank approach outperforms most of the regression models and the combination can improve the performance significantly.
It is worth noting that ranking instances has been extensively studied in the information retrieval, natural language processing, and machine learning fields @cite_5 . To solve a similar problem, Uysal and Croft @cite_29 have proposed the "Coordinate Ascent" learning to rank algorithm to rank tweets for a user so that tweets which are more likely to be retweeted come on top. They have also worked on ranking users for a tweet so that the higher a user is ranked, the more likely that user is to retweet the given tweet. Several learning to rank algorithms have been proposed in the literature. Moreover, there are supervised and unsupervised ensemble methods to aggregate different rankings, such as Borda Count @cite_12 and Cranking @cite_18 . Previous studies show that in many cases ranking aggregation methods outperform single ranking methods @cite_32 @cite_5 .
{ "cite_N": [ "@cite_18", "@cite_29", "@cite_32", "@cite_5", "@cite_12" ], "mid": [ "1550469973", "2145131124", "2784672094", "2009077327", "1988686126" ], "abstract": [ "A new approach to ensemble learning is introduced that takes ranking rather than classification as fundamental, leading to models on the symmetric group and its cosets. The approach uses a generalization of the Mallows model on permutations to combine multiple input rankings. Applications include the task of combining the output of multiple search engines and multiclass or multilabel classification, where a set of input classifiers is viewed as generating a ranking of class labels. Experiments for both types of applications are presented.", "The increasing volume of streaming data on microblogs has re-introduced the necessity of effective filtering mechanisms for such media. Microblog users are overwhelmed with mostly uninteresting pieces of text in order to access information of value. In this paper, we propose a personalized tweet ranking method, leveraging the use of retweet behavior, to bring more important tweets forward. In addition, we also investigate how to determine the audience of tweets more effectively, by ranking the users based on their likelihood of retweeting the tweets. Finally, conducting a pilot user study, we analyze how retweet likelihood correlates with the interestingness of the tweets.", "Learning to rank for information retrieval has gained a lot of interest in the recent years but there is a lack for large real-world datasets to benchmark algorithms. That led us to publicly release two datasets used internally at Yahoo! for learning the web search ranking function. To promote these datasets and foster the development of state-of-the-art learning to rank algorithms, we organized the Yahoo! Learning to Rank Challenge in spring 2010. 
This paper provides an overview and an analysis of this challenge, along with a detailed description of the released datasets.", "Learning to rank refers to machine learning techniques for training the model in a ranking task. Learning to rank is useful for many applications in information retrieval, natural language processing, and data mining. Intensive studies have been conducted on the problem recently and significant progress has been made. This lecture gives an introduction to the area including the fundamental problems, existing approaches, theories, applications, and future work. The author begins by showing that various ranking problems in information retrieval and natural language processing can be formalized as two basic ranking tasks, namely ranking creation (or simply ranking) and ranking aggregation. In ranking creation, given a request, one wants to generate a ranking list of offerings based on the features derived from the request and the offerings. In ranking aggregation, given a request, as well as a number of ranking lists of offerings, one wants to generate a new ranking list of the offerings. Ranking creation (or ranking) is the major problem in learning to rank. It is usually formalized as a supervised learning task. The author gives detailed explanations on learning for ranking creation and ranking aggregation, including training and testing, evaluation, feature creation, and major approaches. Many methods have been proposed for ranking creation. The methods can be categorized as the pointwise, pairwise, and listwise approaches according to the loss functions they employ. They can also be categorized according to the techniques they employ, such as the SVM based, Boosting SVM, Neural Network based approaches. The author also introduces some popular learning to rank methods in details. 
These include PRank, OC SVM, Ranking SVM, IR SVM, GBRank, RankNet, LambdaRank, ListNet & ListMLE, AdaRank, SVM MAP, SoftRank, Borda Count, Markov Chain, and CRanking. The author explains several example applications of learning to rank including web search, collaborative filtering, definition search, keyphrase extraction, query dependent summarization, and re-ranking in machine translation. A formulation of learning for ranking creation is given in the statistical learning framework. Ongoing and future research directions for learning to rank are also discussed. Table of Contents: Introduction Learning for Ranking Creation Learning for Ranking Aggregation Methods of Learning to Rank Applications of Learning to Rank Theory of Learning to Rank Ongoing and Future Work", "Given the ranked lists of documents returned by multiple search engines in response to a given query, the problem of metasearch is to combine these lists in a way which optimizes the performance of the combination. This paper makes three contributions to the problem of metasearch: (1) We describe and investigate a metasearch model based on an optimal democratic voting procedure, the Borda Count; (2) we describe and investigate a metasearch model based on Bayesian inference; and (3) we describe and investigate a model for obtaining upper bounds on the performance of metasearch algorithms. Our experimental results show that metasearch algorithms based on the Borda and Bayesian models usually outperform the best input system and are competitive with, and often outperform, existing metasearch strategies. Finally, our initial upper bounds demonstrate that there is much to learn about the limits of the performance of metasearch." ] }
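The Borda-count metasearch model described in the last abstract above has a very simple core: each input ranking awards a document points by position, and documents are re-ranked by total points. A toy sketch (the function name and data layout are assumptions for illustration, not from the cited work):

```python
# Toy Borda-count rank aggregation: each input ranking of n documents
# awards a document (n - position) points; documents are then
# re-ranked by their total points across all input rankings.

def borda_fuse(rankings):
    """Aggregate ranked lists of documents with the Borda count."""
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for pos, doc in enumerate(ranking):
            scores[doc] = scores.get(doc, 0) + (n - pos)
    # Highest total score first.
    return sorted(scores, key=lambda d: -scores[d])

# Three "search engines" ranking the same four documents:
print(borda_fuse([["a", "b", "c", "d"],
                  ["b", "a", "d", "c"],
                  ["a", "c", "b", "d"]]))
```

Here document "a" collects 4+3+4 = 11 points and wins the fused ranking; ties would need an explicit tie-breaking rule, which the sketch leaves to `sorted`'s stability.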
1501.07174
2952721695
Constraint solution reuse is an effective approach to save the time of constraint solving in symbolic execution. Most of the existing reuse approaches are based on syntactic or semantic equivalence of constraints; e.g. the Green framework is able to reuse constraints which have different representations but are semantically equivalent, through canonizing constraints into syntactically equivalent normal forms. However, syntactic/semantic equivalence is not a necessary condition for reuse--some constraints are not syntactically or semantically equivalent, but their solutions still have potential for reuse. Existing approaches are unable to recognize and reuse such constraints. In this paper, we present GreenTrie, an extension to the Green framework, which supports constraint reuse based on the logical implication relations among constraints. GreenTrie provides a component, called L-Trie, which stores constraints and solutions into tries, indexed by an implication partial order graph of constraints. L-Trie is able to carry out logical reduction and logical subset and superset querying for given constraints, to check for reuse of previously solved constraints. We report the results of an experimental assessment of GreenTrie against the original Green framework, which shows that our extension achieves better reuse of constraint solving results and saves significant symbolic execution time.
@cite_4 caches the symbolic execution tree into a trie, which records the constraint solving results for every branch and reuses them in new runs. When applied to regression analysis, this allows exploration of portions of the program paths to be skipped, instead of merely skipping calls to the solver. GreenTrie and Green could work together with this approach to provide further reuse across runs and programs, and to achieve reuse even when the constraints are not the same.
{ "cite_N": [ "@cite_4" ], "mid": [ "2066353350" ], "abstract": [ "This paper introduces memoized symbolic execution (Memoise), a new approach for more efficient application of forward symbolic execution, which is a well-studied technique for systematic exploration of program behaviors based on bounded execution paths. Our key insight is that application of symbolic execution often requires several successive runs of the technique on largely similar underlying problems, e.g., running it once to check a program to find a bug, fixing the bug, and running it again to check the modified program. Memoise introduces a trie-based data structure that stores the key elements of a run of symbolic execution. Maintenance of the trie during successive runs allows re-use of previously computed results of symbolic execution without the need for re-computing them as is traditionally done. Experiments using our prototype implementation of Memoise show the benefits it holds in various standard scenarios of using symbolic execution, e.g., with iterative deepening of exploration depth, to perform regression analysis, or to enhance coverage using heuristics." ] }
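The trie-based caching idea described above (storing solver verdicts keyed by the branch-condition prefix of a path, so successive runs skip repeated solver calls) can be sketched roughly as follows. All names, and the stand-in "solver", are illustrative assumptions, not the Memoise implementation:

```python
# Sketch of trie-based memoization of path-constraint results: each
# edge is a branch condition, each node can cache the solver verdict
# for the conjunction of conditions on the path from the root.

class TrieNode:
    def __init__(self):
        self.children = {}   # branch condition -> child node
        self.result = None   # cached verdict for this path prefix

class ConstraintTrie:
    def __init__(self):
        self.root = TrieNode()
        self.solver_calls = 0

    def _solve(self, path):
        # Stand-in for a real SAT/SMT call on the conjoined conditions;
        # here a path is "satisfiable" unless it contains "false".
        self.solver_calls += 1
        return all(cond != "false" for cond in path)

    def check(self, path):
        """Return satisfiability of a path, reusing cached results."""
        node = self.root
        for cond in path:
            node = node.children.setdefault(cond, TrieNode())
        if node.result is None:
            node.result = self._solve(path)
        return node.result

trie = ConstraintTrie()
trie.check(("x>0", "y<5"))   # first run: invokes the stand-in solver
trie.check(("x>0", "y<5"))   # repeated run: answered from the trie
```

A second run over the same path prefix costs only a trie walk, which is the effect the cited approach exploits across successive symbolic-execution runs.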
1501.07174
2952721695
Constraint solution reuse is an effective approach to save the time of constraint solving in symbolic execution. Most of the existing reuse approaches are based on syntactic or semantic equivalence of constraints; e.g. the Green framework is able to reuse constraints which have different representations but are semantically equivalent, through canonizing constraints into syntactically equivalent normal forms. However, syntactic/semantic equivalence is not a necessary condition for reuse--some constraints are not syntactically or semantically equivalent, but their solutions still have potential for reuse. Existing approaches are unable to recognize and reuse such constraints. In this paper, we present GreenTrie, an extension to the Green framework, which supports constraint reuse based on the logical implication relations among constraints. GreenTrie provides a component, called L-Trie, which stores constraints and solutions into tries, indexed by an implication partial order graph of constraints. L-Trie is able to carry out logical reduction and logical subset and superset querying for given constraints, to check for reuse of previously solved constraints. We report the results of an experimental assessment of GreenTrie against the original Green framework, which shows that our extension achieves better reuse of constraint solving results and saves significant symbolic execution time.
Reducing a constraint into a shorter one is a popular optimization approach in SAT/SMT solvers and symbolic executors @cite_8 @cite_24 @cite_20 . For example, KLEE @cite_8 performs several constraint reductions before solving. (1) Classical techniques used by optimizing compilers: e.g., simple arithmetic simplifications (x + 0 @math x), strength reduction ( @math , where << is the bit shift operator), and linear simplification (2 * x - x @math x). (2) KLEE actively simplifies the constraint set when new equality constraints are added, by substituting the values of variables into the constraints. For example, if the constraint x < 10 is followed by the constraint x = 5, then the first constraint is simplified to true and eliminated by KLEE. (3) KLEE uses the concrete value of a variable to simplify subsequent constraints by substituting the variable's concrete value. (4) KLEE divides constraint sets into disjoint independent subsets based on the symbolic variables they reference. By explicitly tracking these subsets, KLEE can frequently eliminate irrelevant constraints before sending a query to the constraint solver.
{ "cite_N": [ "@cite_24", "@cite_20", "@cite_8" ], "mid": [ "2009489720", "2132897303", "1710734607" ], "abstract": [ "In unit testing, a program is decomposed into units which are collections of functions. A part of unit can be tested by generating inputs for a single entry function. The entry function may contain pointer arguments, in which case the inputs to the unit are memory graphs. The paper addresses the problem of automating unit testing with memory graphs as inputs. The approach used builds on previous work combining symbolic and concrete execution, and more specifically, using such a combination to generate test inputs to explore all feasible execution paths. The current work develops a method to represent and track constraints that capture the behavior of a symbolic execution of a unit with memory graphs as inputs. Moreover, an efficient constraint solver is proposed to facilitate incremental generation of such test inputs. Finally, CUTE, a tool implementing the method is described together with the results of applying CUTE to real-world examples of C code.", "This paper presents EXE, an effective bug-finding tool that automatically generates inputs that crash real code. Instead of running code on manually or randomly constructed input, EXE runs it on symbolic input initially allowed to be \"anything.\" As checked code runs, EXE tracks the constraints on each symbolic (i.e., input-derived) memory location. If a statement uses a symbolic value, EXE does not run it, but instead adds it as an input-constraint; all other statements run as usual. If code conditionally checks a symbolic expression, EXE forks execution, constraining the expression to be true on the true branch and false on the other. 
Because EXE reasons about all possible values on a path, it has much more power than a traditional runtime tool: (1) it can force execution down any feasible program path and (2) at dangerous operations (e.g., a pointer dereference), it detects if the current path constraints allow any value that causes a bug. When a path terminates or hits a bug, EXE automatically generates a test case by solving the current path constraints to find concrete values using its own co-designed constraint solver, STP. Because EXE's constraints have no approximations, feeding this concrete input to an uninstrumented version of the checked code will cause it to follow the same path and hit the same bug (assuming deterministic code). EXE works well on real code, finding bugs along with inputs that trigger them in: the BSD and Linux packet filter implementations, the udhcpd DHCP server, the pcre regular expression library, and three Linux file systems.", "We present a new symbolic execution tool, KLEE, capable of automatically generating tests that achieve high coverage on a diverse set of complex and environmentally-intensive programs. We used KLEE to thoroughly check all 89 stand-alone programs in the GNU COREUTILS utility suite, which form the core user-level environment installed on millions of Unix systems, and arguably are the single most heavily tested set of open-source programs in existence. KLEE-generated tests achieve high line coverage -- on average over 90% per tool (median: over 94%) -- and significantly beat the coverage of the developers' own hand-written test suite. When we did the same for 75 equivalent tools in the BUSYBOX embedded system suite, results were even better, including 100% coverage on 31 of them. We also used KLEE as a bug finding tool, applying it to 452 applications (over 430K total lines of code), where it found 56 serious bugs, including three in COREUTILS that had been missed for over 15 years. 
Finally, we used KLEE to crosscheck purportedly identical BUSYBOX and COREUTILS utilities, finding functional correctness errors and a myriad of inconsistencies." ] }
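Two of the KLEE-style reductions described in the related-work paragraph above (substituting known equalities, and splitting a constraint set into independent subsets) can be sketched on toy constraints. The `(var, op, value)` representation and function names are assumptions for illustration, not KLEE's data structures:

```python
# Toy constraints are (variable, operator, constant) triples, e.g.
# ("x", "<", 10). Two reductions in the spirit of those listed above.

def substitute_equalities(constraints):
    """Drop constraints made trivially true by an equality x == c."""
    env = {v: c for (v, op, c) in constraints if op == "=="}
    out = []
    for (v, op, c) in constraints:
        if op == "<" and v in env and env[v] < c:
            continue          # e.g. x == 5 makes x < 10 redundant
        out.append((v, op, c))
    return out

def independent_subsets(constraints):
    """Group single-variable constraints that share no variables."""
    groups = []
    for con in constraints:
        var = con[0]
        for g in groups:
            if any(c[0] == var for c in g):
                g.append(con)
                break
        else:
            groups.append([con])
    return groups

cs = [("x", "<", 10), ("y", "<", 3), ("x", "==", 5)]
print(substitute_equalities(cs))   # x < 10 is implied by x == 5
print(independent_subsets(cs))     # x-constraints split from y-constraints
```

A query mentioning only `y` could then be sent to the solver with the `x` subset omitted, which is the effect of KLEE's independence optimization; real implementations must also merge groups transitively for multi-variable constraints, which this single-variable sketch sidesteps.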
1501.07379
1589713412
Efficient embedding of virtual clusters in a physical network is a challenging problem. In this paper we consider a scenario where the physical network has the structure of a balanced tree. This assumption is justified by many real-world implementations of datacenters. We consider an extension to virtual cluster embedding by introducing replication among data chunks. In many real-world applications, data is stored in a distributed and redundant way. This assumption introduces additional hardness in deciding which replica to process. By reduction from the classical NP-complete problem of Boolean Satisfiability, we show limits of optimality of embedding. Our result holds even in trees of edge height bounded by three. Also, we show that limiting the replication factor to two replicas per chunk type does not make the problem simpler.
There has recently been much interest in programming models and distributed system architectures for the processing and analysis of big data (e.g. @cite_15 @cite_7 @cite_8 ). The model studied in this paper is motivated by MapReduce @cite_7 like batch-processing applications, also known from the popular open-source implementation . These applications generate large amounts of network traffic @cite_0 @cite_9 @cite_13 , and over the last years, several systems have been proposed which provide a provable network performance, also in shared cloud environments, by supporting relative @cite_10 @cite_5 @cite_14 or, as in the case of our paper, @cite_2 @cite_4 @cite_16 @cite_12 @cite_6 bandwidth reservations between the virtual machines.
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_7", "@cite_8", "@cite_10", "@cite_9", "@cite_6", "@cite_0", "@cite_2", "@cite_5", "@cite_15", "@cite_16", "@cite_13", "@cite_12" ], "mid": [ "1516717550", "2097882016", "2173213060", "2952652959", "2154203494", "2157790661", "2035536069", "", "2112486185", "1976511505", "", "2102248411", "", "2273138441" ], "abstract": [ "While today's virtual datacenters have hypervisor based mechanisms to partition compute resources between the tenants co-located on an end host, they provide little control over how tenants shore the network. This opens cloud applications to interference from other tenants, resulting in unpredictable performance and exposure to denial of service attacks. This paper explores the design space for achieving performance isolation between tenants. We find that existing schemes for enterprise datacenters suffer from at least one of these problems: they cannot keep up with the numbers of tenants and the VM churn observed in cloud datacenters; they impose static bandwidth limits to obtain isolation at the cost of network utilization; they require switch and or NIC modifications; they cannot tolerate malicious tenants and compromised hypervisors. We propose Seawall, an edge-based solution, that achieves max-min fairness across tenant VMs by sending traffic through congestion-controlled, hypervisor-to-hypervisor tunnels.", "In this paper, we propose virtual data center (VDC) as the unit of resource allocation for multiple tenants in the cloud. VDCs are more desirable than physical data centers because the resources allocated to VDCs can be rapidly adjusted as tenants' needs change. To enable the VDC abstraction, we design a data center network virtualization architecture called SecondNet. SecondNet achieves scalability by distributing all the virtual-to-physical mapping, routing, and bandwidth reservation state in server hypervisors. 
Its port-switching based source routing (PSSR) further makes SecondNet applicable to arbitrary network topologies using commodity servers and switches. SecondNet introduces a centralized VDC allocation algorithm for bandwidth guaranteed virtual to physical mapping. Simulations demonstrate that our VDC allocation achieves high network utilization and low time complexity. Our implementation and experiments show that we can build SecondNet on top of various network topologies, and SecondNet provides bandwidth guarantee and elasticity, as designed.", "MapReduce is a programming model and an associated implementation for processing and generating large datasets that is amenable to a broad variety of real-world tasks. Users specify the computation in terms of a map and a reduce function, and the underlying runtime system automatically parallelizes the computation across large-scale clusters of machines, handles machine failures, and schedules inter-machine communication to make efficient use of the network and disks. Programmers find the system easy to use: more than ten thousand distinct MapReduce programs have been implemented internally at Google over the past four years, and an average of one hundred thousand MapReduce jobs are executed on Google's clusters every day, processing a total of more than twenty petabytes of data per day.", "Shark is a new data analysis system that marries query processing with complex analytics on large clusters. It leverages a novel distributed memory abstraction to provide a unified engine that can run SQL queries and sophisticated analytics functions (e.g., iterative machine learning) at scale, and efficiently recovers from failures mid-query. This allows Shark to run SQL queries up to 100x faster than Apache Hive, and machine learning programs up to 100x faster than Hadoop. 
Unlike previous systems, Shark shows that it is possible to achieve these speedups while retaining a MapReduce-like execution engine, and the fine-grained fault tolerance properties that such engines provide. It extends such an engine in several ways, including column-oriented in-memory storage and dynamic mid-query replanning, to effectively execute SQL. The result is a system that matches the speedups reported for MPP analytic databases over MapReduce, while offering fault tolerance properties and complex analytics capabilities that they lack.", "The network is a crucial resource in cloud computing, but in contrast to other resources such as CPU or memory, the network is currently shared in a best effort manner. However, sharing the network in a datacenter is more challenging than sharing the other resources. The key difficulty is that the network allocation for a VM X depends not only on the VMs running on the same machine with X, but also on the other VMs that X communicates with, as well as on the cross-traffic on each link used by X. In this paper, we first propose a set of desirable properties for allocating the network bandwidth in a datacenter at the VM granularity, and show that there exists a fundamental tradeoff between the ability to share congested links in proportion to payment and the ability to provide minimal bandwidth guarantees to VMs. Second, we show that the existing allocation models violate one or more of these properties, and propose a mechanism that can select different points in the aforementioned tradeoff between payment proportionality and bandwidth guarantees.", "Infrastructure-as-a-Service (\"Cloud\") data-centers intrinsically depend on high-performance networks to connect servers within the data-center and to the rest of the world. Cloud providers typically offer different service levels, and associated prices, for different sizes of virtual machine, memory, and disk storage. 
However, while all cloud providers provide network connectivity to tenant VMs, they seldom make any promises about network performance, and so cloud tenants suffer from highly-variable, unpredictable network performance. Many cloud customers do want to be able to rely on network performance guarantees, and many cloud providers would like to offer (and charge for) these guarantees. But nobody really agrees on how to define these guarantees, and it turns out to be challenging to define \"network performance\" in a way that is useful to both customers and providers. We attempt to bring some clarity to this question.", "In multi-tenant datacenters, jobs of different tenants compete for the shared datacenter network and can suffer poor performance and high cost from varying, unpredictable network performance. Recently, several virtual network abstractions have been proposed to provide explicit APIs for tenant jobs to specify and reserve virtual clusters (VC) with both explicit VMs and required network bandwidth between the VMs. However, all of the existing proposals reserve a fixed bandwidth throughout the entire execution of a job. In the paper, we first profile the traffic patterns of several popular cloud applications, and find that they generate substantial traffic during only 30%-60% of the entire execution, suggesting existing simple VC models waste precious networking resources. We then propose a fine-grained virtual network abstraction, Time-Interleaved Virtual Clusters (TIVC), that models the time-varying nature of the networking requirement of cloud applications. To demonstrate the effectiveness of TIVC, we develop Proteus, a system that implements the new abstraction. 
Using large-scale simulations of cloud application workloads and prototype implementation running actual cloud applications, we show the new abstraction significantly increases the utilization of the entire datacenter and reduces the cost to the tenants, compared to previous fixed-bandwidth abstractions.", "", "The shared nature of the network in today's multi-tenant datacenters implies that network performance for tenants can vary significantly. This applies to both production datacenters and cloud environments. Network performance variability hurts application performance which makes tenant costs unpredictable and causes provider revenue loss. Motivated by these factors, this paper makes the case for extending the tenant-provider interface to explicitly account for the network. We argue this can be achieved by providing tenants with a virtual network connecting their compute instances. To this effect, the key contribution of this paper is the design of virtual network abstractions that capture the trade-off between the performance guarantees offered to tenants, their costs and the provider revenue. To illustrate the feasibility of virtual networks, we develop Oktopus, a system that implements the proposed abstractions. Using realistic, large-scale simulations and an Oktopus deployment on a 25-node two-tier testbed, we demonstrate that the use of virtual networks yields significantly better and more predictable tenant performance. Further, using a simple pricing model, we find that the our abstractions can reduce tenant costs by up to 74 while maintaining provider revenue neutrality.", "While cloud computing providers offer guaranteed allocations for resources such as CPU and memory, they do not offer any guarantees for network resources. The lack of network guarantees prevents tenants from predicting lower bounds on the performance of their applications. 
The research community has recognized this limitation but, unfortunately, prior solutions have significant limitations: either they are inefficient, because they are not work-conserving, or they are impractical, because they require expensive switch support or congestion-free network cores. In this paper, we propose ElasticSwitch, an efficient and practical approach for providing bandwidth guarantees. ElasticSwitch is efficient because it utilizes the spare bandwidth from unreserved capacity or underutilized reservations. ElasticSwitch is practical because it can be fully implemented in hypervisors, without requiring a specific topology or any support from switches. Because hypervisors operate mostly independently, there is no need for complex coordination between them or with a central controller. Our experiments, with a prototype implementation on a 100-server testbed, demonstrate that ElasticSwitch provides bandwidth guarantees and is work-conserving, even in challenging situations.", "", "Today's cloud-based services integrate globally distributed resources into seamless computing platforms. Provisioning and accounting for the resource usage of these Internet-scale applications presents a challenging technical problem. This paper presents the design and implementation of distributed rate limiters, which work together to enforce a global rate limit across traffic aggregates at multiple sites, enabling the coordinated policing of a cloud-based service's network traffic. Our abstraction not only enforces a global limit, but also ensures that congestion-responsive transport-layer flows behave as if they traversed a single, shared limiter. We present two designs - one general purpose, and one optimized for TCP - that allow service operators to explicitly trade off between communication costs and system accuracy, efficiency, and scalability. 
Both designs are capable of rate limiting thousands of flows with negligible overhead (less than 3% in the tested configuration). We demonstrate that our TCP-centric design is scalable to hundreds of nodes while robust to both loss and communication delay, making it practical for deployment in nationwide service providers.", "", "Cloud environments should provide network performance isolation for co-located untrusted tenants in a virtualized datacenter. We present key properties that a performance isolation solution should satisfy, and present our progress on Gatekeeper, a system designed to meet these requirements. Experiments on our Xen-based implementation of Gatekeeper in a datacenter cluster demonstrate effective and flexible control of ingress/egress link bandwidth for tenant virtual machines under both TCP and greedy unresponsive UDP traffic." ] }
1501.07379
1589713412
Efficient embedding of virtual clusters in a physical network is a challenging problem. In this paper we consider a scenario where the physical network has the structure of a balanced tree. This assumption is justified by many real-world implementations of datacenters. We consider an extension to virtual cluster embedding by introducing replication among data chunks. In many real-world applications, data is stored in a distributed and redundant way. This assumption introduces additional hardness in deciding which replica to process. By reduction from the classical NP-complete problem of Boolean Satisfiability, we show limits of optimality of embedding. Our result holds even in trees of edge height bounded by three. Also, we show that limiting the replication factor to two replicas per chunk type does not make the problem simpler.
The most popular virtual network abstraction for batch-processing applications today is the , introduced in the Oktopus paper @cite_2 , and later studied by many others @cite_9 @cite_6 . Several heuristics have been developed to compute "good" embeddings of virtual clusters: embeddings with small footprints (minimal bandwidth reservation costs) @cite_2 @cite_9 @cite_6 . The virtual network embedding problem has also been studied for more general graph abstractions, e.g., motivated by wide-area networks @cite_11 @cite_3 .
{ "cite_N": [ "@cite_9", "@cite_3", "@cite_6", "@cite_2", "@cite_11" ], "mid": [ "2157790661", "2132238781", "2035536069", "2112486185", "2060898162" ], "abstract": [ "Infrastructure-as-a-Service (\"Cloud\") data-centers intrinsically depend on high-performance networks to connect servers within the data-center and to the rest of the world. Cloud providers typically offer different service levels, and associated prices, for different sizes of virtual machine, memory, and disk storage. However, while all cloud providers provide network connectivity to tenant VMs, they seldom make any promises about network performance, and so cloud tenants suffer from highly-variable, unpredictable network performance. Many cloud customers do want to be able to rely on network performance guarantees, and many cloud providers would like to offer (and charge for) these guarantees. But nobody really agrees on how to define these guarantees, and it turns out to be challenging to define \"network performance\" in a way that is useful to both customers and providers. We attempt to bring some clarity to this question.", "Network virtualization is recognized as an enabling technology for the future Internet. It aims to overcome the resistance of the current Internet to architectural change. Application of this technology relies on algorithms that can instantiate virtualized networks on a substrate infrastructure, optimizing the layout for service-relevant metrics. This class of algorithms is commonly known as \"Virtual Network Embedding (VNE)\" algorithms. This paper presents a survey of current research in the VNE area. Based upon a novel classification scheme for VNE algorithms a taxonomy of current approaches to the VNE problem is provided and opportunities for further research are discussed.", "In multi-tenant datacenters, jobs of different tenants compete for the shared datacenter network and can suffer poor performance and high cost from varying, unpredictable network performance. 
Recently, several virtual network abstractions have been proposed to provide explicit APIs for tenant jobs to specify and reserve virtual clusters (VC) with both explicit VMs and required network bandwidth between the VMs. However, all of the existing proposals reserve a fixed bandwidth throughout the entire execution of a job. In the paper, we first profile the traffic patterns of several popular cloud applications, and find that they generate substantial traffic during only 30%-60% of the entire execution, suggesting existing simple VC models waste precious networking resources. We then propose a fine-grained virtual network abstraction, Time-Interleaved Virtual Clusters (TIVC), that models the time-varying nature of the networking requirement of cloud applications. To demonstrate the effectiveness of TIVC, we develop Proteus, a system that implements the new abstraction. Using large-scale simulations of cloud application workloads and prototype implementation running actual cloud applications, we show the new abstraction significantly increases the utilization of the entire datacenter and reduces the cost to the tenants, compared to previous fixed-bandwidth abstractions.", "The shared nature of the network in today's multi-tenant datacenters implies that network performance for tenants can vary significantly. This applies to both production datacenters and cloud environments. Network performance variability hurts application performance which makes tenant costs unpredictable and causes provider revenue loss. Motivated by these factors, this paper makes the case for extending the tenant-provider interface to explicitly account for the network. We argue this can be achieved by providing tenants with a virtual network connecting their compute instances. To this effect, the key contribution of this paper is the design of virtual network abstractions that capture the trade-off between the performance guarantees offered to tenants, their costs and the provider revenue. 
To illustrate the feasibility of virtual networks, we develop Oktopus, a system that implements the proposed abstractions. Using realistic, large-scale simulations and an Oktopus deployment on a 25-node two-tier testbed, we demonstrate that the use of virtual networks yields significantly better and more predictable tenant performance. Further, using a simple pricing model, we find that our abstractions can reduce tenant costs by up to 74% while maintaining provider revenue neutrality.", "Due to the existence of multiple stakeholders with conflicting goals and policies, alterations to the existing Internet architecture are now limited to simple incremental updates; deployment of any new, radically different technology is next to impossible. To fend off this ossification, network virtualization has been propounded as a diversifying attribute of the future inter-networking paradigm. By introducing a plurality of heterogeneous network architectures cohabiting on a shared physical substrate, network virtualization promotes innovations and diversified applications. In this paper, we survey the existing technologies and a wide array of past and state-of-the-art projects on network virtualization followed by a discussion of major challenges in this area." ] }
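The "small footprint" objective mentioned in the related-work paragraph above has a simple form for the virtual cluster abstraction <N, B>: on a tree edge whose subtree contains m of the N VMs, it suffices to reserve min(m, N - m) * B bandwidth, since no more than that can cross the edge. A minimal sketch, with an assumed flat list of per-edge VM counts standing in for a real tree:

```python
# Footprint of a virtual cluster <N, B> embedded in a tree: each edge
# separating m of the N VMs from the rest reserves min(m, N - m) * B.

def edge_reservation(m, n, b):
    """Bandwidth reserved on an edge with m of n VMs in its subtree."""
    return min(m, n - m) * b

def footprint(vm_counts, n, b):
    """Total reserved bandwidth, given one VM count per tree edge."""
    return sum(edge_reservation(m, n, b) for m in vm_counts)

# Example: N = 4 VMs at B = 100 Mbps, placed 3-and-1 under the two
# rack edges of a tiny two-rack tree: each edge reserves 100 Mbps.
print(footprint([3, 1], 4, 100))
```

Embedding heuristics try to place VMs so that these per-edge cuts, summed over all edges, are as small as possible; a real implementation would derive the per-edge VM counts from the tree placement rather than take them as input.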
1501.07338
2950858732
We recently have witnessed many ground-breaking results in machine learning and computer vision, generated by using deep convolutional neural networks (CNN). While the success mainly stems from the large volume of training data and the deep network architectures, the vector processing hardware (e.g. GPU) undisputedly plays a vital role in modern CNN implementations to support massive computation. Though much attention was paid in the extant literature to understand the algorithmic side of deep CNN, little research was dedicated to the vectorization for scaling up CNNs. In this paper, we studied the vectorization process of key building blocks in deep CNNs, in order to better understand and facilitate parallel implementation. Key steps in training and testing deep CNNs are abstracted as matrix and vector operators, upon which parallelism can be easily achieved. We developed and compared six implementations with various degrees of vectorization with which we illustrated the impact of vectorization on the speed of model training and testing. Besides, a unified CNN framework for both high-level and low-level vision tasks is provided, along with a vectorized Matlab implementation with state-of-the-art speed performance.
Efforts to speed up CNNs by vectorization started with their inception. A specialized CNN chip @cite_2 was built and successfully applied to handwriting recognition in the early 90s. Later work simplified CNNs by fusing the convolution and pooling operations, which sped up the network and performed well in document analysis. Subsequent work adopted the same architecture but unrolled the convolution operation into a matrix-matrix product. This vectorization approach has since proven to work particularly well with modern GPUs. However, limited by the available computing power, the scale of the CNNs explored at that time was much smaller than that of modern deep CNNs.
{ "cite_N": [ "@cite_2" ], "mid": [ "1530262073" ], "abstract": [ "Convolutional neural networks (CNNs) are well known for producing state-of-the-art recognizers for document processing [1]. However, they can be difficult to implement and are usually slower than traditional multi-layer perceptrons (MLPs). We present three novel approaches to speeding up CNNs: a) unrolling convolution, b) using BLAS (basic linear algebra subroutines), and c) using GPUs (graphic processing units). Unrolled convolution converts the processing in each convolutional layer (both forward-propagation and back-propagation) into a matrix-matrix product. The matrix-matrix product representation of CNNs makes their implementation as easy as MLPs. BLAS is used to efficiently compute matrix products on the CPU. We also present a pixel shader based GPU implementation of CNNs. Results on character recognition problems indicate that unrolled convolution with BLAS produces a dramatic 2.4X−3.0X speedup. The GPU implementation is even faster and produces a 3.1X−4.1X speedup." ] }
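The unrolled convolution cited above maps each image patch to one row of a matrix, so the whole convolutional layer collapses into a single matrix product. A minimal NumPy sketch of this "im2col" trick (function names and sizes are illustrative, not from the cited papers):

```python
import numpy as np

def im2col(img, k):
    """Unroll all k-by-k patches of a 2-D image into the rows of a matrix."""
    H, W = img.shape
    rows = []
    for i in range(H - k + 1):
        for j in range(W - k + 1):
            rows.append(img[i:i+k, j:j+k].ravel())
    return np.array(rows)  # shape: ((H-k+1)*(W-k+1), k*k)

def conv2d_direct(img, kern):
    """Reference 'valid' 2-D correlation computed with explicit loops."""
    k = kern.shape[0]
    out = np.empty((img.shape[0] - k + 1, img.shape[1] - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+k, j:j+k] * kern)
    return out

rng = np.random.default_rng(0)
img = rng.standard_normal((6, 6))
kern = rng.standard_normal((3, 3))

# Unrolled convolution: one matrix-vector product replaces the nested loops.
unrolled = im2col(img, 3) @ kern.ravel()
assert np.allclose(unrolled, conv2d_direct(img, kern).ravel())
```

With multiple filters, `kern.ravel()` becomes a matrix with one column per filter, which is exactly the matrix-matrix product form that BLAS and GPUs execute efficiently.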
1501.07338
2950858732
We recently have witnessed many ground-breaking results in machine learning and computer vision, generated by using deep convolutional neural networks (CNN). While the success mainly stems from the large volume of training data and the deep network architectures, the vector processing hardware (e.g. GPU) undisputedly plays a vital role in modern CNN implementations to support massive computation. Though much attention was paid in the extent literature to understand the algorithmic side of deep CNN, little research was dedicated to the vectorization for scaling up CNNs. In this paper, we studied the vectorization process of key building blocks in deep CNNs, in order to better understand and facilitate parallel implementation. Key steps in training and testing deep CNNs are abstracted as matrix and vector operators, upon which parallelism can be easily achieved. We developed and compared six implementations with various degrees of vectorization with which we illustrated the impact of vectorization on the speed of model training and testing. Besides, a unified CNN framework for both high-level and low-level vision tasks is provided, along with a vectorized Matlab implementation with state-of-the-art speed performance.
When deep architectures showed their ability to effectively learn highly complex functions @cite_3 , scaling up neural network based models soon became one of the major tasks in deep learning @cite_0 . Vectorization played an important role in achieving this goal. Scaling up CNNs by vectorized GPU implementations such as Caffe @cite_13 , Overfeat @cite_16 , CudaConvnet @cite_8 and Theano @cite_4 generates state-of-the-art results on many vision tasks. Despite the good performance, few of the previous papers elaborated on their vectorization strategies. As a consequence, how vectorization affects design choices in both model training and testing remains unclear.
{ "cite_N": [ "@cite_4", "@cite_8", "@cite_3", "@cite_0", "@cite_16", "@cite_13" ], "mid": [ "2152175008", "2163605009", "", "", "1487583988", "2950094539" ], "abstract": [ "Theano is a compiler for mathematical expressions in Python that combines the convenience of NumPy's syntax with the speed of optimized native machine language. The user composes mathematical expressions in a high-level description that mimics NumPy's syntax and semantics, while being statically typed and functional (as opposed to imperative). These expressions allow Theano to provide symbolic differentiation. Before performing computation, Theano optimizes the choice of expressions, translates them into C++ (or CUDA for GPU), compiles them into dynamically loaded Python modules, all automatically. Common machine learning algorithms implemented with Theano are from 1.6 to 7.5 times faster than competitive alternatives (including those implemented with C/C++, NumPy/SciPy and MATLAB) when compiled for the CPU and between 6.5 and 44 times faster when compiled for the GPU. This paper illustrates how to use Theano, outlines the scope of the compiler, provides benchmarks on both CPU and GPU processors, and explains its overall design.", "We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. 
To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.", "", "", "We present an integrated framework for using Convolutional Networks for classification, localization and detection. We show how a multiscale and sliding window approach can be efficiently implemented within a ConvNet. We also introduce a novel deep learning approach to localization by learning to predict object boundaries. Bounding boxes are then accumulated rather than suppressed in order to increase detection confidence. We show that different tasks can be learned simultaneously using a single shared network. This integrated framework is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013) and obtained very competitive results for the detection and classification tasks. In post-competition work, we establish a new state of the art for the detection task. Finally, we release a feature extractor from our best model called OverFeat.", "Caffe provides multimedia scientists and practitioners with a clean and modifiable framework for state-of-the-art deep learning algorithms and a collection of reference models. The framework is a BSD-licensed C++ library with Python and MATLAB bindings for training and deploying general-purpose convolutional neural networks and other deep models efficiently on commodity architectures. Caffe fits industry and internet-scale media needs by CUDA GPU computation, processing over 40 million images a day on a single K40 or Titan GPU ( @math 2.5 ms per image). 
By separating model representation from actual implementation, Caffe allows experimentation and seamless switching among platforms for ease of development and deployment from prototyping machines to cloud environments. Caffe is maintained and developed by the Berkeley Vision and Learning Center (BVLC) with the help of an active community of contributors on GitHub. It powers ongoing research projects, large-scale industrial applications, and startup prototypes in vision, speech, and multimedia." ] }
1501.07338
2950858732
We recently have witnessed many ground-breaking results in machine learning and computer vision, generated by using deep convolutional neural networks (CNN). While the success mainly stems from the large volume of training data and the deep network architectures, the vector processing hardware (e.g. GPU) undisputedly plays a vital role in modern CNN implementations to support massive computation. Though much attention was paid in the extant literature to understand the algorithmic side of deep CNN, little research was dedicated to the vectorization for scaling up CNNs. In this paper, we studied the vectorization process of key building blocks in deep CNNs, in order to better understand and facilitate parallel implementation. Key steps in training and testing deep CNNs are abstracted as matrix and vector operators, upon which parallelism can be easily achieved. We developed and compared six implementations with various degrees of vectorization with which we illustrated the impact of vectorization on the speed of model training and testing. Besides, a unified CNN framework for both high-level and low-level vision tasks is provided, along with a vectorized Matlab implementation with state-of-the-art speed performance.
Efforts have also been made to accelerate parts of deep CNNs from algorithmic aspects, exemplified by the separable kernels for convolution @cite_17 and the FFT speedup @cite_22 . Instead of finding a faster alternative for one specific layer, we focus more on the general vectorization techniques used in all building blocks of deep CNNs, which is instrumental not only in accelerating existing networks, but also in providing guidance for implementing and designing new CNNs across different platforms and for various vision tasks.
{ "cite_N": [ "@cite_22", "@cite_17" ], "mid": [ "1922123711", "2950248853" ], "abstract": [ "Convolutional networks are one of the most widely employed architectures in computer vision and machine learning. In order to leverage their ability to learn complex functions, large amounts of data are required for training. Training a large convolutional network to produce state-of-the-art results can take weeks, even when using modern GPUs. Producing labels using a trained network can also be costly when dealing with web-scale datasets. In this work, we present a simple algorithm which accelerates training and inference by a significant factor, and can yield improvements of over an order of magnitude compared to existing state-of-the-art implementations. This is done by computing convolutions as pointwise products in the Fourier domain while reusing the same transformed feature map many times. The algorithm is implemented on a GPU architecture and addresses a number of related challenges.", "We present techniques for speeding up the test-time evaluation of large convolutional networks, designed for object recognition tasks. These models deliver impressive accuracy but each image evaluation requires millions of floating point operations, making their deployment on smartphones and Internet-scale clusters problematic. The computation is dominated by the convolution operations in the lower layers of the model. We exploit the linear structure present within the convolutional filters to derive approximations that significantly reduce the required computation. Using large state-of-the-art models, we demonstrate we demonstrate speedups of convolutional layers on both CPU and GPU by a factor of 2x, while keeping the accuracy within 1 of the original model." ] }
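The FFT speedup cited above rests on the convolution theorem: circular convolution becomes a pointwise product in the Fourier domain, turning an O(n^2) operation into O(n log n). A minimal 1-D NumPy sketch with illustrative sizes (the cited work applies the same idea to 2-D feature maps on GPU):

```python
import numpy as np

def circ_conv_direct(x, h):
    """Circular convolution from the definition: O(n^2)."""
    n = len(x)
    return np.array([sum(x[j] * h[(i - j) % n] for j in range(n))
                     for i in range(n)])

def circ_conv_fft(x, h):
    """Same result via the convolution theorem: O(n log n)."""
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))

rng = np.random.default_rng(1)
x = rng.standard_normal(64)
h = rng.standard_normal(64)
assert np.allclose(circ_conv_direct(x, h), circ_conv_fft(x, h))
```

The gain in the cited work comes from transforming each feature map once and reusing it across many filters, so the transform cost is amortized.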
1501.07258
1533252206
The divisible sandpile starts with i.i.d. random variables (“masses”) at the vertices of an infinite, vertex-transitive graph, and redistributes mass by a local toppling rule in an attempt to make all masses ≤ 1. The process stabilizes almost surely if m < 1, where m is the mean mass per vertex. The main result of this paper is that in the critical case m = 1, if the initial masses have finite variance, then the process almost surely does not stabilize. To give quantitative estimates on a finite graph, we relate the number of topplings to a discrete bi-Laplacian Gaussian field.
The divisible sandpile was introduced in @cite_8 @cite_11 to study the scaling limits of two growth models, rotor aggregation and internal DLA. The divisible sandpile has also been used as a device for proving an exact mean value property for discrete harmonic functions (JLS13, Lemma 2.2). These works focused on sandpiles with finite total mass on an infinite graph, in which case exploding is not a possibility. In the present paper we expand the focus to sandpiles with infinite total mass.
{ "cite_N": [ "@cite_11", "@cite_8" ], "mid": [ "1992413315", "2167628842" ], "abstract": [ "We study the scaling limits of three different aggregation models on ℤ^d: internal DLA, in which particles perform random walks until reaching an unoccupied site; the rotor-router model, in which particles perform deterministic analogues of random walks; and the divisible sandpile, in which each site distributes its excess mass equally among its neighbors. As the lattice spacing tends to zero, all three models are found to have the same scaling limit, which we describe as the solution to a certain PDE free boundary problem in ℝ^d. In particular, internal DLA has a deterministic scaling limit. We find that the scaling limits are quadrature domains, which have arisen independently in many fields such as potential theory and fluid dynamics. Our results apply both to the case of multiple point sources and to the Diaconis-Fulton smash sum of domains.", "The rotor-router model is a deterministic analogue of random walk. It can be used to define a deterministic growth model analogous to internal DLA. We prove that the asymptotic shape of this model is a Euclidean ball, in a sense which is stronger than our earlier work (Levine and Peres, Indiana Univ Math J 57(1):431–450, 2008). For the shape consisting of n = ω_d r^d sites, where ω_d is the volume of the unit ball in ℝ^d, we show that the inradius of the set of occupied sites is at least r − O(log r), while the outradius is at most r + O(r^α) for any α > 1 − 1/d. For a related model, the divisible sandpile, we show that the domain of occupied sites is a Euclidean ball with error in the radius a constant independent of the total mass. For the classical abelian sandpile model in two dimensions, with n = πr^2 particles, we show that the inradius is at least r/√3, and the outradius is at most (r + o(r))√2. This improves on bounds of Le Borgne and Rossin. 
Similar bounds apply in higher dimensions, improving on bounds of Fey and Redig." ] }
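The toppling rule described in the abstract above (a full site keeps mass 1 and distributes its excess equally among its neighbors) is easy to simulate directly. The sketch below runs a parallel toppling schedule on a cycle graph with subcritical mean mass m < 1; the graph, schedule, tolerance, and sweep cap are illustrative choices, not from the paper:

```python
import numpy as np

def stabilize(mass, max_sweeps=50000, tol=1e-9):
    """Divisible sandpile on a cycle of len(mass) sites: in each sweep every
    site with mass > 1 keeps mass 1 and splits its excess equally between
    its two neighbours (a parallel toppling schedule)."""
    mass = mass.astype(float).copy()
    odometer = np.zeros_like(mass)      # cumulative mass emitted per site
    for _ in range(max_sweeps):
        excess = np.maximum(mass - 1.0, 0.0)
        if excess.max() < tol:
            break
        odometer += excess
        mass -= excess
        mass += 0.5 * (np.roll(excess, 1) + np.roll(excess, -1))
    return mass, odometer

rng = np.random.default_rng(2)
n, m = 100, 0.9                          # subcritical: mean mass m < 1
initial = rng.uniform(0.0, 2.0 * m, size=n)
final, odo = stabilize(initial)

assert final.max() <= 1.0 + 1e-6         # stabilized: every mass <= 1
assert np.isclose(final.sum(), initial.sum())  # toppling conserves mass
```

The odometer array is the quantity the paper relates to the bi-Laplacian field; by abelianness its final value should not depend on the toppling order, though this sketch uses only one schedule.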
1501.07258
1533252206
The divisible sandpile starts with i.i.d. random variables (“masses”) at the vertices of an infinite, vertex-transitive graph, and redistributes mass by a local toppling rule in an attempt to make all masses ≤ 1. The process stabilizes almost surely if m < 1, where m is the mean mass per vertex. The main result of this paper is that in the critical case m = 1, if the initial masses have finite variance, then the process almost surely does not stabilize. To give quantitative estimates on a finite graph, we relate the number of topplings to a discrete bi-Laplacian Gaussian field.
The abelian sandpile has a much longer history: it arose in statistical physics as a model of 'self-organized criticality' (SOC) @cite_5 @cite_10 . The dichotomy between stabilizing and exploding configurations arose in the course of a debate about whether SOC does or does not involve tuning a parameter to a critical value @cite_7 @cite_9 . Without reopening that particular debate, we view the stabilizing/exploding dichotomy as a topic with its own intrinsic mathematical interest. An example of its importance can be seen in the partial differential equation for the scaling limit of the abelian sandpile on @math , which relies on a classification of certain 'quadratic' sandpiles according to whether they are stabilizing or exploding @cite_0 .
{ "cite_N": [ "@cite_7", "@cite_9", "@cite_0", "@cite_5", "@cite_10" ], "mid": [ "2165425090", "2096778322", "2595923342", "2075514521", "2037190943" ], "abstract": [ "We define stabilizability of an infinite volume height configuration and of a probability measure on height configurations. We show that for high enough densities, a probability measure cannot be stabilized. We also show that in some sense the thermodynamic limit of the uniform measures on the recurrent configurations of the abelian sandpile model (ASM) is a maximal element of the set of stabilizable measures. In that sense the self-organized critical behavior of the ASM can be understood in terms of an ordinary transition between stabilizable and non-stabilizable. Key-words: Self-organized criticality, abelian sandpile model, activated random walkers, stabilizability. AMS classification: 60K35 (primary), 60G60 (secondary)", "We investigate the nature of the self-organised critical behaviour in the Abelian sandpile model and in the Bak–Sneppen evolution model. We claim that in either case, the self-organised critical behaviour can be explained by the careful choice of the details of the model: they are designed in such a way that the models are necessarily attracted to the critical point of a conventional parametrised equilibrium system. In the case of the Abelian sandpile we prove this connection to conventional criticality rigorously in one dimension, and provide evidence for a similar result in higher dimensions. In the case of the Bak–Sneppen evolution model, we give an overview of the current results, and explain why these results support our claim. We conclude that the term self-organised criticality is somewhat confusing, since the tuning of parameters in a model has been replaced by the careful choice of a suitable model. 
Viewed as such, we can hardly call this critical behaviour spontaneous.", "", "We show that dynamical systems with spatial degrees of freedom naturally evolve into a self-organized critical point. Flicker noise, or 1/f noise, can be identified with the dynamics of the critical state. This picture also yields insight into the origin of fractal objects.", "We study a general Bak-Tang-Wiesenfeld-type automaton model of self-organized criticality in which the toppling conditions depend on local height, but not on its gradient. We characterize the critical state, and determine its entropy for an arbitrary finite lattice in any dimension. The two-point correlation function is shown to satisfy a linear equation. The spectrum of relaxation times describing the approach to the critical state is also determined exactly." ] }
1501.07258
1533252206
The divisible sandpile starts with i.i.d. random variables (“masses”) at the vertices of an infinite, vertex-transitive graph, and redistributes mass by a local toppling rule in an attempt to make all masses ≤ 1. The process stabilizes almost surely if m < 1, where m is the mean mass per vertex. The main result of this paper is that in the critical case m = 1, if the initial masses have finite variance, then the process almost surely does not stabilize. To give quantitative estimates on a finite graph, we relate the number of topplings to a discrete bi-Laplacian Gaussian field.
The Gaussian vector @math in Proposition can be interpreted as a discrete bi-Laplacian field. In @math for dimensions @math , Sun and Wu construct another discrete model for the bi-Laplacian field by assigning random signs to each component of the uniform spanning forest @cite_1 .
{ "cite_N": [ "@cite_1" ], "mid": [ "1653417910" ], "abstract": [ "We construct a natural discrete random field on Z d , d � 5 that converges weakly to the bi-Laplacian Gaussian field in the scaling limit. The construction is based on assigning i.i.d. Bernoulli random variables on each component of the uniform spanning forest, thus defines an associated random function. To our knowledge, this is the first natural discrete model (besides the discrete bi-Laplacian Gaussian field) that converges to the bi-Laplacian Gaussian field." ] }
1501.06864
67860792
The design of high-precision sensing devices becomes ever more difficult and expensive. At the same time, the need for precise calibration of these devices (ranging from tiny sensors to space telescopes) manifests itself as a major roadblock in many scientific and technological endeavors. To achieve optimal performance of advanced high-performance sensors one must carefully calibrate them, which is often difficult or even impossible to do in practice. In this work we bring together three seemingly unrelated concepts, namely self-calibration, compressive sensing, and biconvex optimization. The idea behind self-calibration is to equip a hardware device with a smart algorithm that can compensate automatically for the lack of calibration. We show how several self-calibration problems can be treated efficiently within the framework of biconvex compressive sensing via a new method called SparseLift. More specifically, we consider a linear system of equations where both the signal and the diagonal matrix (which models the calibration error) are unknown. By 'lifting' this biconvex inverse problem we arrive at a convex optimization problem. By exploiting sparsity in the signal model, we derive explicit theoretical guarantees under which both quantities can be recovered exactly, robustly, and numerically efficiently via linear programming. Applications in array calibration and wireless communications are discussed and numerical simulations are presented, confirming and complementing our theoretical analysis.
Besides drawing from general ideas in compressive sensing and matrix completion, the research presented in this work is mostly influenced by papers related to PhaseLift @cite_43 @cite_23 @cite_10 and in particular by the very inspiring paper on blind deconvolution by Ahmed, Recht, and Romberg @cite_44 . PhaseLift provides a strategy for reconstructing a signal @math from its quadratic measurements @math via 'lifting' techniques. Here the key idea is to lift a vector-valued quadratic problem to a matrix-valued linear problem. Specifically, we need to find a rank-one positive semi-definite matrix @math which satisfies the linear measurement equations of @math : @math where @math . Generally, problems of this form are NP-hard. However, solving the following nuclear norm minimization yields exact recovery of @math .
{ "cite_N": [ "@cite_44", "@cite_43", "@cite_10", "@cite_23" ], "mid": [ "2140867429", "", "2963855280", "2078397124" ], "abstract": [ "We consider the problem of recovering two unknown vectors, w and x, of length L from their circular convolution. We make the structural assumption that the two vectors are members of known subspaces, one with dimension N and the other with dimension K. Although the observed convolution is nonlinear in both w and x, it is linear in the rank-1 matrix formed by their outer product wx*. This observation allows us to recast the deconvolution problem as low-rank matrix recovery problem from linear measurements, whose natural convex relaxation is a nuclear norm minimization program. We prove the effectiveness of this relaxation by showing that, for “generic” signals, the program can deconvolve w and x exactly when the maximum of N and K is almost on the order of L. That is, we show that if x is drawn from a random subspace of dimension N, and w is a vector in a subspace of dimension K whose basis vectors are spread out in the frequency domain, then nuclear norm minimization recovers wx* without error. We discuss this result in the context of blind channel estimation in communications. If we have a message of length N, which we code using a random L x N coding matrix, and the encoded message travels through an unknown linear time-invariant channel of maximum length K, then the receiver can recover both the channel response and the message when L ≳ N + K, to within constant and log factors.", "", "In this paper we consider a system of quadratic equations @math , where @math is unknown while normal random vectors @math and quadratic measurements @math are known. The system is assumed to be underdetermined, i.e., @math . 
We prove that if there exists a sparse solution @math , i.e., at most @math components of @math are nonzero, then by solving a convex optimization program, we can solve for @math up to a multiplicative constant with high probability, provided that @math . On the other hand, we prove that @math is necessary for a class of natural convex relaxations to be exact.", "Suppose we wish to recover a signal amssym @math from m intensity measurements of the form , ; that is, from data in which phase information is missing. We prove that if the vectors are sampled independently and uniformly at random on the unit sphere, then the signal x can be recovered exactly (up to a global phase factor) by solving a convenient semidefinite program–-a trace-norm minimization problem; this holds with large probability provided that m is on the order of , and without any assumption about the signal whatsoever. This novel result demonstrates that in some instances, the combinatorial phase retrieval problem can be solved by convex programming techniques. Finally, we also prove that our methodology is robust vis-a-vis additive noise. © 2012 Wiley Periodicals, Inc." ] }
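The lifting step behind PhaseLift, discussed in the related-work paragraph above, can be checked numerically: a quadratic measurement of the signal is a linear measurement of its rank-one lifted matrix. A small NumPy sketch (real-valued for simplicity, all sizes illustrative; this verifies the identity only, not the convex recovery program itself):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 8, 20
x = rng.standard_normal(n)                 # unknown signal
A = rng.standard_normal((m, n))            # sensing vectors a_i as rows

# Quadratic (phaseless) measurements y_i = |<a_i, x>|^2 ...
y = (A @ x) ** 2

# ... are LINEAR in the lifted rank-one matrix X = x x^T,
# since |<a_i, x>|^2 = a_i^T X a_i = Tr(a_i a_i^T X).
X = np.outer(x, x)
y_lifted = np.einsum('ij,jk,ik->i', A, X, A)
assert np.allclose(y, y_lifted)

# Given the true lifted matrix, x is read off (up to a global sign)
# from the top eigenpair of X.
w, V = np.linalg.eigh(X)
x_hat = np.sqrt(w[-1]) * V[:, -1]
assert np.allclose(np.abs(x_hat), np.abs(x))
```

In PhaseLift the rank-one X is not given but is found (under suitable conditions) by trace-norm minimization over the positive semi-definite matrices consistent with y.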
1501.06864
67860792
The design of high-precision sensing devices becomes ever more difficult and expensive. At the same time, the need for precise calibration of these devices (ranging from tiny sensors to space telescopes) manifests itself as a major roadblock in many scientific and technological endeavors. To achieve optimal performance of advanced high-performance sensors one must carefully calibrate them, which is often difficult or even impossible to do in practice. In this work we bring together three seemingly unrelated concepts, namely self-calibration, compressive sensing, and biconvex optimization. The idea behind self-calibration is to equip a hardware device with a smart algorithm that can compensate automatically for the lack of calibration. We show how several self-calibration problems can be treated efficiently within the framework of biconvex compressive sensing via a new method called SparseLift. More specifically, we consider a linear system of equations where both the signal and the diagonal matrix (which models the calibration error) are unknown. By 'lifting' this biconvex inverse problem we arrive at a convex optimization problem. By exploiting sparsity in the signal model, we derive explicit theoretical guarantees under which both quantities can be recovered exactly, robustly, and numerically efficiently via linear programming. Applications in array calibration and wireless communications are discussed and numerical simulations are presented, confirming and complementing our theoretical analysis.
This idea of 'lifting' also applies to blind deconvolution @cite_44 , which refers to the recovery of two unknown signals from their convolution. The model can be converted into the above form by applying the Fourier transform, and under proper assumptions @math can be recovered exactly by solving a nuclear norm minimization program in the following form, provided the number of measurements is sufficiently larger than the dimensions of both @math and @math . Compared with PhaseLift, the difference is that @math can be asymmetric and there is of course no positivity guarantee. Ahmed, Recht, and Romberg derived a very appealing framework for blind deconvolution via convex programming, and we borrow many of their ideas for our proofs.
{ "cite_N": [ "@cite_44" ], "mid": [ "2140867429" ], "abstract": [ "We consider the problem of recovering two unknown vectors, w and x, of length L from their circular convolution. We make the structural assumption that the two vectors are members of known subspaces, one with dimension N and the other with dimension K. Although the observed convolution is nonlinear in both w and x, it is linear in the rank-1 matrix formed by their outer product wx*. This observation allows us to recast the deconvolution problem as low-rank matrix recovery problem from linear measurements, whose natural convex relaxation is a nuclear norm minimization program. We prove the effectiveness of this relaxation by showing that, for “generic” signals, the program can deconvolve w and x exactly when the maximum of N and K is almost on the order of L. That is, we show that if x is drawn from a random subspace of dimension N, and w is a vector in a subspace of dimension K whose basis vectors are spread out in the frequency domain, then nuclear norm minimization recovers wx* without error. We discuss this result in the context of blind channel estimation in communications. If we have a message of length N, which we code using a random L x N coding matrix, and the encoded message travels through an unknown linear time-invariant channel of maximum length K, then the receiver can recover both the channel response and the message when L ≳ N + K, to within constant and log factors." ] }
1501.06715
2951323746
Microblogging services like Twitter and Facebook collect millions of pieces of user generated content every moment about trending news, occurring events, and so on. Nevertheless, it is really a nightmare to find information of interest through the huge amount of available posts, which are often noisy and redundant. In general, social media analytics services have attracted increasing attention from both research and industry. Specifically, the dynamic context of microblogging requires managing not only the meaning of information but also the evolution of knowledge over the timeline. This work defines the Time Aware Knowledge Extraction (briefly TAKE) methodology, which relies on a temporal extension of Fuzzy Formal Concept Analysis. In particular, a microblog summarization algorithm has been defined that filters the concepts organized by TAKE in a time-dependent hierarchy. The algorithm addresses topic-based summarization on Twitter. Besides considering the timing of the concepts, another distinguishing feature of the proposed microblog summarization framework is the possibility to have a more or less detailed summary, according to the user's needs, with good levels of quality and completeness as highlighted in the experimental results.
From the time-dependent document summarization point of view, some existing approaches aim to address the update summarization task defined in TAC (www.nist.gov/tac). Specifically, they emphasize the novelty of the subsequent summary @cite_19 . Unlike these, the proposed approach focuses more on the temporal development of the story (i.e. topic or event) that is stressed by the multitude of messages posted through the microblogging service, i.e. Twitter.
{ "cite_N": [ "@cite_19" ], "mid": [ "2081173909" ], "abstract": [ "The detection of new information in a document stream is an important component of many potential applications. In this work, a new novelty detection approach based on the identification of sentence level information patterns is proposed. First, the information-pattern concept for novelty detection is presented with the emphasis on new information patterns for general topics (queries) that cannot be simply turned into specific questions whose answers are specific named entities (NEs). Then we elaborate a thorough analysis of sentence level information patterns on data from the TREC novelty tracks, including sentence lengths, named entities, sentence level opinion patterns. This analysis provides guidelines in applying those patterns in novelty detection particularly for the general topics. Finally, a unified pattern-based approach is presented to novelty detection for both general and specific topics. The new method for dealing with general topics will be the focus. Experimental results show that the proposed approach significantly improves the performance of novelty detection for general topics as well as the overall performance for all topics from the 2002-2004 TREC novelty tracks." ] }
1501.06715
2951323746
Microblogging services like Twitter and Facebook collect millions of pieces of user generated content every moment about trending news, occurring events, and so on. Nevertheless, it is really a nightmare to find information of interest through the huge amount of available posts, which are often noisy and redundant. In general, social media analytics services have attracted increasing attention from both research and industry. Specifically, the dynamic context of microblogging requires managing not only the meaning of information but also the evolution of knowledge over the timeline. This work defines the Time Aware Knowledge Extraction (briefly TAKE) methodology, which relies on a temporal extension of Fuzzy Formal Concept Analysis. In particular, a microblog summarization algorithm has been defined that filters the concepts organized by TAKE in a time-dependent hierarchy. The algorithm addresses topic-based summarization on Twitter. Besides considering the timing of the concepts, another distinguishing feature of the proposed microblog summarization framework is the possibility to have a more or less detailed summary, according to the user's needs, with good levels of quality and completeness as highlighted in the experimental results.
Considering our proposal, we find some similarities in @cite_3 and @cite_20 . Specifically, @cite_3 describes a framework for summarizing events from a tweet stream. The authors define two topic models, the Decay Topic Model (DTM) and the Gaussian DTM, to extract summaries from microblogs, and they argue that these models outperform an LDA (Latent Dirichlet Allocation) baseline that does not consider temporal relations among tweets. Instead, the approach in @cite_20 introduces sequential summarization for Twitter trending topics, exploiting two approaches: a stream-based approach that aims to extract important subtopics for a specific category (e.g., News, Sport, etc.) by identifying peak areas according to the timestamps of the tweets; and a semantic-based approach leveraging Dynamic Topic Modeling, which extends LDA to account for the timeline, to identify topics from a semantic perspective in the time interval. In @cite_20 the authors argue that a hybrid approach considering both the stream and the semantics of the tweets outperforms the others.
{ "cite_N": [ "@cite_20", "@cite_3" ], "mid": [ "1999415654", "165968472" ], "abstract": [ "As an information delivering platform, Twitter collects millions of tweets every day. However, some users, especially new users, often find it difficult to understand trending topics in Twitter when confronting the overwhelming and unorganized tweets. Existing work has attempted to provide a short snippet to explain a topic, but this only provides limited benefits and cannot satisfy the users' expectations. In this paper, we propose a new summarization task, namely sequential summarization, which aims to provide a series of chronologically ordered short sub-summaries for a trending topic in order to provide a complete story about the development of the topic while retaining the order of information presentation. Different from the traditional summarization task, the numbers of sub-summaries for different topics are not fixed. Two approaches, i.e., stream-based and semantic-based approaches, are developed to detect the important subtopics within a trending topic. Then a short sub-summary is generated for each subtopic. In addition, we propose three new measures to evaluate the position-aware coverage, sequential novelty and sequence correlation of the system-generated summaries. The experimental results based on the proposed evaluation criteria have demonstrated the effectiveness of the proposed approaches.", "Social media services such as Twitter generate phenomenal volume of content for most real-world events on a daily basis. Digging through the noise and redundancy to understand the important aspects of the content is a very challenging task. We propose a search and summarization framework to extract relevant representative tweets from a time-ordered sample of tweets to generate a coherent and concise summary of an event. We introduce two topic models that take advantage of temporal correlation in the data to extract relevant tweets for summarization. 
The summarization framework has been evaluated using Twitter data on four real-world events. Evaluations are performed using Wikipedia articles on the events as well as using Amazon Mechanical Turk (MTurk) with human readers (MTurkers). Both experiments show that the proposed models outperform traditional LDA and lead to informative summaries." ] }
1501.06715
2951323746
Microblogging services like Twitter and Facebook collect millions of user-generated posts every moment about trending news, occurring events, and so on. Nevertheless, it is really a nightmare to find information of interest through the huge amount of available posts, which are often noisy and redundant. In general, social media analytics services have caught increasing attention from both research and industry. Specifically, the dynamic context of microblogging requires managing not only the meaning of information but also the evolution of knowledge over the timeline. This work defines the Time Aware Knowledge Extraction (briefly TAKE) methodology, which relies on a temporal extension of Fuzzy Formal Concept Analysis. In particular, a microblog summarization algorithm has been defined that filters the concepts organized by TAKE in a time-dependent hierarchy. The algorithm addresses topic-based summarization on Twitter. Besides considering the timing of the concepts, another distinguishing feature of the proposed microblog summarization framework is the possibility to obtain a more or less detailed summary, according to the user's needs, with good levels of quality and completeness, as highlighted in the experimental results.
In general, these research works highlight that, due to the dynamic nature of microblog content, it is crucial for summarization to consider both the chronological order of the posts and their information content. Unlike these microblog summarization approaches, which treat the time and meaning of the tweets at two different stages, our solution considers both the timestamps and the meaning of the tweets at the same time. This work presents the Time Aware Knowledge Extraction (briefly TAKE) methodology as a new approach to perform conceptual and temporal analysis of tweets' content for microblog summarization. TAKE extends Fuzzy Formal Concept Analysis @cite_11 by introducing time dependencies among objects, in order to provide a summary that follows the evolution of the story over the timeline. Furthermore, the proposed framework achieves good performance in terms of F-measure, with optimal recall and precision comparable to the compared approaches. Specifically, the timed fuzzy lattice extracted by TAKE enables us to support user requests by providing a more or less succinct summary according to specific needs.
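The closure operator at the core of Formal Concept Analysis, which TAKE extends with fuzziness and time dependencies, can be sketched in its crisp form as follows. This is a minimal illustration with a hypothetical tweet-to-attribute context; it is not the TAKE implementation:

```python
def intent(objects, context):
    """Attributes shared by every object in `objects` (the derivation operator)."""
    sets = [context[o] for o in objects]
    if not sets:  # empty object set derives to all attributes
        return {a for attrs in context.values() for a in attrs}
    return set.intersection(*sets)

def extent(attributes, context):
    """Objects possessing every attribute in `attributes`."""
    return {o for o, attrs in context.items() if attributes <= attrs}

def concept_from_objects(objects, context):
    """Close a set of objects into a formal concept (extent, intent)."""
    i = intent(objects, context)
    return extent(i, context), i

# Hypothetical context: tweets (objects) annotated with topical attributes.
context = {"t1": {"news", "sport"}, "t2": {"news"}, "t3": {"sport"}}
```

For example, `concept_from_objects({"t1", "t2"}, context)` yields the concept `({"t1", "t2"}, {"news"})`; organizing all such concepts by extent inclusion gives the concept lattice that a TAKE-style summarizer would filter.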
{ "cite_N": [ "@cite_11" ], "mid": [ "1970718874" ], "abstract": [ "In recent years, knowledge structuring is assuming important roles in several real world applications such as decision support, cooperative problem solving, e-commerce, Semantic Web and, even in planning systems. Ontologies play an important role in supporting automated processes to access information and are at the core of new strategies for the development of knowledge-based systems. Yet, developing an ontology is a time-consuming task which often needs an accurate domain expertise to tackle structural and logical difficulties in the definition of concepts as well as conceivable relationships. This work presents an ontology-based retrieval approach, that supports data organization and visualization and provides a friendly navigation model. It exploits the fuzzy extension of the Formal Concept Analysis theory to elicit conceptualizations from datasets and generate a hierarchy-based representation of extracted knowledge. An intuitive graphical interface provides a multi-facets view of the built ontology. Through a transparent query-based retrieval, final users navigate across concepts, relations and population." ] }
1501.06237
1528580407
Semi-supervised clustering is a very important topic in machine learning and computer vision. The key challenge of this problem is how to learn a metric such that instances sharing the same label are likely to be close to each other in the embedded space. However, little attention has been paid to learning better representations when the data lie on a non-linear manifold. Fortunately, deep learning has recently led to great success in feature learning. Inspired by these advances, we propose a deep transductive semi-supervised maximum margin clustering approach. More specifically, given pairwise constraints, we exploit both labeled and unlabeled data to learn a non-linear mapping under the maximum margin framework for clustering analysis. Thus, our model unifies transductive learning, feature learning, and maximum margin techniques in the semi-supervised clustering framework. We pretrain the deep network structure with restricted Boltzmann machines (RBMs) layer by layer greedily, and optimize our objective function with gradient descent. By checking the most violated constraints, our approach updates the model parameters through error backpropagation, in which deep features are learned automatically. The experimental results show that our model is significantly better than the state of the art on semi-supervised clustering.
Semi-supervised clustering with partial labels generally explores two directions to improve performance: (1) leveraging more sophisticated classification models, such as maximum margin techniques @cite_24 @cite_22 ; (2) learning a better distance metric @cite_6 @cite_22 .
{ "cite_N": [ "@cite_24", "@cite_22", "@cite_6" ], "mid": [ "2105842272", "2106053110", "2003677307" ], "abstract": [ "Learning general functional dependencies between arbitrary input and output spaces is one of the key challenges in computational intelligence. While recent progress in machine learning has mainly focused on designing flexible and powerful input representations, this paper addresses the complementary issue of designing classification algorithms that can deal with more complex outputs, such as trees, sequences, or sets. More generally, we consider problems involving multiple dependent output variables, structured output spaces, and classification problems with class attributes. In order to accomplish this, we propose to appropriately generalize the well-known notion of a separation margin and derive a corresponding maximum-margin formulation. While this leads to a quadratic program with a potentially prohibitive, i.e. exponential, number of constraints, we present a cutting plane algorithm that solves the optimization problem in polynomial time for a large class of problems. The proposed method has important applications in areas such as computational biology, natural language processing, information retrieval extraction, and optical character recognition. Experiments from various domains involving different types of output spaces emphasize the breadth and generality of our approach.", "The accuracy of k-nearest neighbor (kNN) classification depends significantly on the metric used to compute distances between different examples. In this paper, we show how to learn a Mahalanobis distance metric for kNN classification from labeled examples. The Mahalanobis metric can equivalently be viewed as a global linear transformation of the input space that precedes kNN classification using Euclidean distances. 
In our approach, the metric is trained with the goal that the k-nearest neighbors always belong to the same class while examples from different classes are separated by a large margin. As in support vector machines (SVMs), the margin criterion leads to a convex optimization based on the hinge loss. Unlike learning in SVMs, however, our approach requires no modification or extension for problems in multiway (as opposed to binary) classification. In our framework, the Mahalanobis distance metric is obtained as the solution to a semidefinite program. On several data sets of varying size and difficulty, we find that metrics trained in this way lead to significant improvements in kNN classification. Sometimes these results can be further improved by clustering the training examples and learning an individual metric within each cluster. We show how to learn and combine these local metrics in a globally integrated manner.", "We describe and analyze an online algorithm for supervised learning of pseudo-metrics. The algorithm receives pairs of instances and predicts their similarity according to a pseudo-metric. The pseudo-metrics we use are quadratic forms parameterized by positive semi-definite matrices. The core of the algorithm is an update rule that is based on successive projections onto the positive semi-definite cone and onto half-space constraints imposed by the examples. We describe an efficient procedure for performing these projections, derive a worst case mistake bound on the similarity predictions, and discuss a dual version of the algorithm in which it is simple to incorporate kernel operators. The online algorithm also serves as a building block for deriving a large-margin batch algorithm. We demonstrate the merits of the proposed approach by conducting experiments on MNIST dataset and on document filtering." ] }
1501.06237
1528580407
Semi-supervised clustering is a very important topic in machine learning and computer vision. The key challenge of this problem is how to learn a metric such that instances sharing the same label are likely to be close to each other in the embedded space. However, little attention has been paid to learning better representations when the data lie on a non-linear manifold. Fortunately, deep learning has recently led to great success in feature learning. Inspired by these advances, we propose a deep transductive semi-supervised maximum margin clustering approach. More specifically, given pairwise constraints, we exploit both labeled and unlabeled data to learn a non-linear mapping under the maximum margin framework for clustering analysis. Thus, our model unifies transductive learning, feature learning, and maximum margin techniques in the semi-supervised clustering framework. We pretrain the deep network structure with restricted Boltzmann machines (RBMs) layer by layer greedily, and optimize our objective function with gradient descent. By checking the most violated constraints, our approach updates the model parameters through error backpropagation, in which deep features are learned automatically. The experimental results show that our model is significantly better than the state of the art on semi-supervised clustering.
Maximum margin clustering (MMC) aims to find hyperplanes that partition the data into different clusters, over all possible labelings, with large margins @cite_10 @cite_28 @cite_12 . Nevertheless, the accuracy of MMC's clustering results can sometimes suffer due to its unsupervised nature @cite_11 . Thus, it is of interest to incorporate semi-supervised information, e.g., pairwise constraints, into the recently proposed maximum margin clustering framework. Recent research demonstrates the advantages of leveraging pairwise constraints in semi-supervised clustering problems @cite_30 @cite_13 @cite_2 @cite_17 @cite_1 @cite_8 . In particular, COPKmeans [11] is a semi-supervised variant of Kmeans that follows the same clustering procedure as Kmeans while avoiding violations of pairwise constraints. MPCKmeans @cite_8 extended Kmeans and utilized both metric learning and pairwise constraints in the clustering process. More recently, @cite_3 show that classification can be improved with pairwise constraints under the maximum margin framework. @cite_4 leverage the margin-based approach on semi-supervised clustering problems and yield competitive results.
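The constraint-respecting assignment step of COPKmeans described above can be sketched as follows. This is a minimal illustration under assumed data structures (constraint lists of index pairs and a partial assignment dict), not the authors' implementation:

```python
import numpy as np

def violates(point_idx, cluster, assign, must_link, cannot_link):
    """Check whether placing point_idx into `cluster` breaks a constraint
    with respect to the points already assigned."""
    for a, b in must_link:
        other = b if a == point_idx else a if b == point_idx else None
        if other is not None and assign.get(other) is not None and assign[other] != cluster:
            return True
    for a, b in cannot_link:
        other = b if a == point_idx else a if b == point_idx else None
        if other is not None and assign.get(other) == cluster:
            return True
    return False

def cop_assign(X, centroids, must_link, cannot_link):
    """One COP-Kmeans assignment pass: each point goes to the nearest
    centroid that does not violate any pairwise constraint."""
    assign = {}
    for i, x in enumerate(X):
        order = np.argsort([np.linalg.norm(x - c) for c in centroids])
        for k in order:
            if not violates(i, int(k), assign, must_link, cannot_link):
                assign[i] = int(k)
                break
        else:
            raise ValueError(f"no feasible cluster for point {i}")
    return assign
```

For instance, with points at 0.0, 0.1, and 5.0, centroids at 0.0 and 5.0, and a cannot-link between the first two points, the second point is pushed to the far cluster despite being nearest to the first centroid. The full algorithm alternates this pass with the usual centroid update.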
{ "cite_N": [ "@cite_30", "@cite_13", "@cite_4", "@cite_8", "@cite_28", "@cite_1", "@cite_17", "@cite_3", "@cite_2", "@cite_10", "@cite_12", "@cite_11" ], "mid": [ "2134089414", "1596382552", "", "2096100960", "2149982386", "1564583583", "2159583439", "", "2117154949", "2132820034", "", "2161498107" ], "abstract": [ "Clustering is traditionally viewed as an unsupervised method for data analysis. However, in some cases information about the problem domain is available in addition to the data instances themselves. In this paper, we demonstrate how the popular k-means clustering algorithm can be profitably modified to make use of this information. In experiments with artificial constraints on six data sets, we observe improvements in clustering accuracy. We also apply this method to the real-world problem of automatically detecting road lanes from GPS data and observe dramatic increases in performance.", "We present an improved method for clustering in the presence of very limited supervisory information, given as pairwise instance constraints. By allowing instance-level constraints to have space-level inductive implications, we are able to successfully incorporate constraints for a wide range of data set types. Our method greatly improves on the previously studied constrained k-means algorithm, generally requiring less than half as many constraints to achieve a given accuracy on a range of real-world data, while also being more robust when over-constrained. We additionally discuss an active learning algorithm which increases the value of constraints even further.", "", "Semi-supervised clustering employs a small amount of labeled data to aid unsupervised learning. Previous work in the area has utilized supervised data in one of two approaches: 1) constraint-based methods that guide the clustering algorithm towards a better grouping of the data, and 2) distance-function learning methods that adapt the underlying similarity metric used by the clustering algorithm. 
This paper provides new methods for the two approaches as well as presents a new semi-supervised clustering algorithm that integrates both of these techniques in a uniform, principled framework. Experimental results demonstrate that the unified approach produces better clusters than both individual approaches as well as previously proposed semi-supervised clustering algorithms.", "Maximum margin clustering was proposed lately and has shown promising performance in recent studies [1, 2]. It extends the theory of support vector machine to unsupervised learning. Despite its good performance, there are three major problems with maximum margin clustering that question its efficiency for real-world applications. First, it is computationally expensive and difficult to scale to large-scale datasets because the number of parameters in maximum margin clustering is quadratic in the number of examples. Second, it requires data preprocessing to ensure that any clustering boundary will pass through the origins, which makes it unsuitable for clustering unbalanced dataset. Third, it is sensitive to the choice of kernel functions, and requires external procedure to determine the appropriate values for the parameters of kernel functions. In this paper, we propose \"generalized maximum margin clustering\" framework that addresses the above three problems simultaneously. The new framework generalizes the maximum margin clustering algorithm by allowing any clustering boundaries including those not passing through the origins. It significantly improves the computational efficiency by reducing the number of parameters. Furthermore, the new framework is able to automatically determine the appropriate kernel matrix without any labeled data. Finally, we show a formal connection between maximum margin clustering and spectral clustering. 
We demonstrate the efficiency of the generalized maximum margin clustering algorithm using both synthetic datasets and real datasets from the UCI repository.", "We present an approach to clustering based on the observation that \"it is easier to criticize than to construct.\" Our approach of semi-supervised clustering allows a user to iteratively provide feedback to a clustering algorithm. The feedback is incorporated in the form of constraints, which the clustering algorithm attempts to satisfy on future iterations. These constraints allow the user to guide the clusterer toward clusterings of the data that the user finds more useful. We demonstrate semi-supervised clustering with a system that learns to cluster news stories from a Reuters data set.", "We address the problem of learning distance metrics using side-information in the form of groups of \"similar\" points. We propose to use the RCA algorithm, which is a simple and efficient algorithm for learning a full ranked Mahalanobis metric (, 2002). We first show that RCA obtains the solution to an interesting optimization problem, founded on an information theoretic basis. If the Mahalanobis matrix is allowed to be singular, we show that Fisher's linear discriminant followed by RCA is the optimal dimensionality reduction algorithm under the same criterion. We then show how this optimization problem is related to the criterion optimized by another recent algorithm for metric learning (, 2002), which uses the same kind of side information. We empirically demonstrate that learning a distance metric using the RCA algorithm significantly improves clustering performance, similarly to the alternative algorithm. 
Since the RCA algorithm is much more efficient and cost effective than the alternative, as it only uses closed form expressions of the data, it seems like a preferable choice for the learning of full rank Mahalanobis distances.", "", "Many algorithms rely critically on being given a good metric over their inputs. For instance, data can often be clustered in many \"plausible\" ways, and if a clustering algorithm such as K-means initially fails to find one that is meaningful to a user, the only recourse may be for the user to manually tweak the metric until sufficiently good clusters are found. For these and other applications requiring good metrics, it is desirable that we provide a more systematic way for users to indicate what they consider \"similar.\" For instance, we may ask them to provide examples. In this paper, we present an algorithm that, given examples of similar (and, if desired, dissimilar) pairs of points in ℝn, learns a distance metric over ℝn that respects these relationships. Our method is based on posing metric learning as a convex optimization problem, which allows us to give efficient, local-optima-free algorithms. We also demonstrate empirically that the learned metrics can be used to significantly improve clustering performance.", "We propose a new method for clustering based on finding maximum margin hyperplanes through data. By reformulating the problem in terms of the implied equivalence relation matrix, we can pose the problem as a convex integer program. Although this still yields a difficult computational problem, the hard-clustering constraints can be relaxed to a soft-clustering formulation which can be feasibly solved with a semidefinite program. Since our clustering technique only depends on the data through the kernel matrix, we can easily achieve nonlinear clusterings in the same manner as spectral clustering. 
Experimental results show that our maximum margin clustering technique often obtains more accurate results than conventional clustering methods. The real benefit of our approach, however, is that it leads naturally to a semi-supervised training method for support vector machines. By maximizing the margin simultaneously on labeled and unlabeled training data, we achieve state of the art performance by using a single, integrated learning principle.", "", "The pairwise constraints specifying whether a pair of samples should be grouped together or not have been successfully incorporated into the conventional clustering methods such as k-means and spectral clustering for the performance enhancement. Nevertheless, the issue of pairwise constraints has not been well studied in the recently proposed maximum margin clustering (MMC), which extends the maximum margin framework in supervised learning for clustering and often shows a promising performance. This paper therefore proposes a pairwise constrained MMC algorithm. Based on the maximum margin idea in MMC, we propose a set of effective loss functions for discouraging the violation of given pairwise constraints. For the resulting optimization problem, we show that the original nonconvex problem in our approach can be decomposed into a sequence of convex quadratic program problems via constrained concave-convex procedure (CCCP). Subsequently, we present an efficient subgradient projection optimization method to solve each convex problem in the CCCP sequence. Experiments on a number of real-world data sets show that the proposed constrained MMC algorithm is scalable and outperforms the existing constrained MMC approach as well as the typical semi-supervised clustering counterparts." ] }
1501.06237
1528580407
Semi-supervised clustering is a very important topic in machine learning and computer vision. The key challenge of this problem is how to learn a metric such that instances sharing the same label are likely to be close to each other in the embedded space. However, little attention has been paid to learning better representations when the data lie on a non-linear manifold. Fortunately, deep learning has recently led to great success in feature learning. Inspired by these advances, we propose a deep transductive semi-supervised maximum margin clustering approach. More specifically, given pairwise constraints, we exploit both labeled and unlabeled data to learn a non-linear mapping under the maximum margin framework for clustering analysis. Thus, our model unifies transductive learning, feature learning, and maximum margin techniques in the semi-supervised clustering framework. We pretrain the deep network structure with restricted Boltzmann machines (RBMs) layer by layer greedily, and optimize our objective function with gradient descent. By checking the most violated constraints, our approach updates the model parameters through error backpropagation, in which deep features are learned automatically. The experimental results show that our model is significantly better than the state of the art on semi-supervised clustering.
How to learn a good metric over the input space is critical for a successful semi-supervised clustering approach. Hence, another direction for clustering is to learn a distance metric @cite_2 @cite_6 @cite_15 @cite_7 @cite_23 @cite_22 that reflects the underlying relationships between input instance pairs. The pseudo-metric of @cite_6 , parameterized by positive semi-definite (PSD) matrices, is learned with an online update rule that alternates between projections onto the PSD cone and onto half-space constraints imposed by the instance pairs. @cite_2 proposed to learn a (Mahalanobis) distance metric that respects pairwise constraints for clustering. In @cite_23 , an information-theoretic approach to learning a Mahalanobis distance function via the LogDet divergence is proposed. Recently, a supervised approach to learning a Mahalanobis metric was also proposed in @cite_22 , which minimizes the pairwise distances between instances in the same cluster while increasing the separation between data points of dissimilar classes. To handle data that lie on non-linear manifolds, kernel methods are widely used. Unfortunately, these non-linear embedding algorithms are shallow methods.
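Most of the metrics above are Mahalanobis distances parameterized by a PSD matrix M. A minimal sketch of the distance itself (the matrices here are hand-picked for illustration, not learned by any of the cited algorithms):

```python
import numpy as np

def mahalanobis(x, y, M):
    """Squared Mahalanobis distance d_M(x, y)^2 = (x - y)^T M (x - y).

    M must be positive semi-definite for d_M to be a valid pseudo-metric,
    which is exactly the constraint the online algorithm projects onto.
    """
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(d @ M @ d)

# With M = I the metric reduces to the squared Euclidean distance;
# a diagonal M rescales distances axis by axis.
M_euclid = np.eye(2)
M_scaled = np.diag([4.0, 1.0])
```

Metric learning amounts to choosing M (or a factorization M = L^T L, i.e., a linear embedding L) so that constrained pairs end up near or far under this distance.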
{ "cite_N": [ "@cite_22", "@cite_7", "@cite_6", "@cite_23", "@cite_2", "@cite_15" ], "mid": [ "2106053110", "2104752854", "2003677307", "", "2117154949", "2144935315" ], "abstract": [ "The accuracy of k-nearest neighbor (kNN) classification depends significantly on the metric used to compute distances between different examples. In this paper, we show how to learn a Mahalanobis distance metric for kNN classification from labeled examples. The Mahalanobis metric can equivalently be viewed as a global linear transformation of the input space that precedes kNN classification using Euclidean distances. In our approach, the metric is trained with the goal that the k-nearest neighbors always belong to the same class while examples from different classes are separated by a large margin. As in support vector machines (SVMs), the margin criterion leads to a convex optimization based on the hinge loss. Unlike learning in SVMs, however, our approach requires no modification or extension for problems in multiway (as opposed to binary) classification. In our framework, the Mahalanobis distance metric is obtained as the solution to a semidefinite program. On several data sets of varying size and difficulty, we find that metrics trained in this way lead to significant improvements in kNN classification. Sometimes these results can be further improved by clustering the training examples and learning an individual metric within each cluster. We show how to learn and combine these local metrics in a globally integrated manner.", "We present an algorithm for learning a quadratic Gaussian metric (Mahalanobis distance) for use in classification tasks. Our method relies on the simple geometric intuition that a good metric is one under which points in the same class are simultaneously near each other and far from points in the other classes. 
We construct a convex optimization problem whose solution generates such a metric by trying to collapse all examples in the same class to a single point and push examples in other classes infinitely far away. We show that when the metric we learn is used in simple classifiers, it yields substantial improvements over standard alternatives on a variety of problems. We also discuss how the learned metric may be used to obtain a compact low dimensional feature representation of the original input space, allowing more efficient classification with very little reduction in performance.", "We describe and analyze an online algorithm for supervised learning of pseudo-metrics. The algorithm receives pairs of instances and predicts their similarity according to a pseudo-metric. The pseudo-metrics we use are quadratic forms parameterized by positive semi-definite matrices. The core of the algorithm is an update rule that is based on successive projections onto the positive semi-definite cone and onto half-space constraints imposed by the examples. We describe an efficient procedure for performing these projections, derive a worst case mistake bound on the similarity predictions, and discuss a dual version of the algorithm in which it is simple to incorporate kernel operators. The online algorithm also serves as a building block for deriving a large-margin batch algorithm. We demonstrate the merits of the proposed approach by conducting experiments on MNIST dataset and on document filtering.", "", "Many algorithms rely critically on being given a good metric over their inputs. For instance, data can often be clustered in many \"plausible\" ways, and if a clustering algorithm such as K-means initially fails to find one that is meaningful to a user, the only recourse may be for the user to manually tweak the metric until sufficiently good clusters are found. 
For these and other applications requiring good metrics, it is desirable that we provide a more systematic way for users to indicate what they consider \"similar.\" For instance, we may ask them to provide examples. In this paper, we present an algorithm that, given examples of similar (and, if desired, dissimilar) pairs of points in ℝn, learns a distance metric over ℝn that respects these relationships. Our method is based on posing metric learning as a convex optimization problem, which allows us to give efficient, local-optima-free algorithms. We also demonstrate empirically that the learned metrics can be used to significantly improve clustering performance.", "In this paper we propose a novel method for learning a Mahalanobis distance measure to be used in the KNN classification algorithm. The algorithm directly maximizes a stochastic variant of the leave-one-out KNN score on the training set. It can also learn a low-dimensional linear embedding of labeled data that can be used for data visualization and fast classification. Unlike other methods, our classification model is non-parametric, making no assumptions about the shape of the class distributions or the boundaries between them. The performance of the method is demonstrated on several data sets, both for metric learning and linear dimensionality reduction." ] }
1501.06237
1528580407
Semi-supervised clustering is a very important topic in machine learning and computer vision. The key challenge of this problem is how to learn a metric such that instances sharing the same label are likely to be close to each other in the embedded space. However, little attention has been paid to learning better representations when the data lie on a non-linear manifold. Fortunately, deep learning has recently led to great success in feature learning. Inspired by these advances, we propose a deep transductive semi-supervised maximum margin clustering approach. More specifically, given pairwise constraints, we exploit both labeled and unlabeled data to learn a non-linear mapping under the maximum margin framework for clustering analysis. Thus, our model unifies transductive learning, feature learning, and maximum margin techniques in the semi-supervised clustering framework. We pretrain the deep network structure with restricted Boltzmann machines (RBMs) layer by layer greedily, and optimize our objective function with gradient descent. By checking the most violated constraints, our approach updates the model parameters through error backpropagation, in which deep features are learned automatically. The experimental results show that our model is significantly better than the state of the art on semi-supervised clustering.
On the other hand, recent advances in deep learning @cite_5 @cite_33 @cite_29 have sparked great interest in dimension reduction @cite_0 @cite_27 and classification problems @cite_5 @cite_21 . In a sense, the success of deep learning lies in the learned features, which are useful for both supervised and unsupervised tasks @cite_34 @cite_29 . For example, the binary hidden units in the discriminative Restricted Boltzmann Machines (RBMs) @cite_32 @cite_14 can model latent features of the data that improve classification. Deep learning for semi-supervised embedding @cite_27 extends shallow semi-supervised learning techniques such as kernel methods with deep neural networks, and yields promising results. The work of @cite_9 is most closely related to our proposed algorithm. It presents deep learning with support vector machines, which can automatically learn features from labeled data under a discriminative learning framework. However, that approach is fully supervised and targets classification problems, while our model addresses semi-supervised clustering. Compared to conventional methods, our model considers both feature learning and transductive principles in the semi-supervised clustering setting, so that it can handle complex data distributions and learn a better non-linear mapping to improve clustering performance.
{ "cite_N": [ "@cite_14", "@cite_33", "@cite_29", "@cite_21", "@cite_9", "@cite_32", "@cite_0", "@cite_27", "@cite_5", "@cite_34" ], "mid": [ "2146540262", "2145094598", "2163922914", "", "825525975", "", "2100495367", "2159291644", "2136922672", "2138857742" ], "abstract": [ "The paper develops a connection between traditional perceptron algorithms and recently introduced herding algorithms. It is shown that both algorithms can be viewed as an application of the perceptron cycling theorem. This connection strengthens some herding results and suggests new (supervised) herding algorithms that, like CRFs or discriminative RBMs, make predictions by conditioning on the input attributes. We develop and investigate variants of conditional herding, and show that conditional herding leads to practical algorithms that perform better than or on par with related classifiers such as the voted perceptron and the discriminative RBM.", "We explore an original strategy for building deep networks, based on stacking layers of denoising autoencoders which are trained locally to denoise corrupted versions of their inputs. The resulting algorithm is a straightforward variation on the stacking of ordinary autoencoders. It is however shown on a benchmark of classification problems to yield significantly lower classification error, thus bridging the performance gap with deep belief networks (DBN), and in several cases surpassing it. Higher level representations learnt in this purely unsupervised fashion also help boost the performance of subsequent SVM classifiers. Qualitative experiments show that, contrary to ordinary autoencoders, denoising autoencoders are able to learn Gabor-like edge detectors from natural image patches and larger stroke detectors from digit images. 
This work clearly establishes the value of using a denoising criterion as a tractable unsupervised objective to guide the learning of useful higher level representations.", "The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks. This motivates longer term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation, and manifold learning.", "", "", "", "High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such “autoencoder” networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.", "We show how nonlinear embedding algorithms popular for use with shallow semi-supervised learning techniques such as kernel methods can be applied to deep multilayer architectures, either as a regularizer at the output layer, or on each layer of the architecture. 
This provides a simple alternative to existing approaches to deep learning whilst yielding competitive error rates compared to those methods, and existing shallow semi-supervised techniques.", "We show how to use \"complementary priors\" to eliminate the explaining-away effects that make inference difficult in densely connected belief nets that have many hidden layers. Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory. The fast, greedy algorithm is used to initialize a slower learning procedure that fine-tunes the weights using a contrastive version of the wake-sleep algorithm. After fine-tuning, a network with three hidden layers forms a very good generative model of the joint distribution of handwritten digit images and their labels. This generative model gives better digit classification than the best discriminative learning algorithms. The low-dimensional manifolds on which the digits lie are modeled by long ravines in the free-energy landscape of the top-level associative memory, and it is easy to explore these ravines by using the directed connections to display what the associative memory has in mind.", "Much recent research has been devoted to learning algorithms for deep architectures such as Deep Belief Networks and stacks of auto-encoder variants, with impressive results obtained in several areas, mostly on vision and language data sets. The best results obtained on supervised learning tasks involve an unsupervised learning component, usually in an unsupervised pre-training phase. Even though these new algorithms have enabled training deep models, many questions remain as to the nature of this difficult learning problem. The main question investigated here is the following: how does unsupervised pre-training work? Answering this question is important if learning in deep architectures is to be further improved. 
We propose several explanatory hypotheses and test them through extensive simulations. We empirically show the influence of pre-training with respect to architecture depth, model capacity, and number of training examples. The experiments confirm and clarify the advantage of unsupervised pre-training. The results suggest that unsupervised pre-training guides the learning towards basins of attraction of minima that support better generalization from the training data set; the evidence from these results supports a regularization explanation for the effect of pre-training." ] }
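The stacked denoising autoencoder abstract above hinges on training against corrupted inputs. A minimal sketch of the masking-noise corruption step only (illustrative; the corruption fraction `p` and the reconstruction objective are as described in that abstract, the rest of the training loop is omitted):

```python
import numpy as np

def masking_corrupt(x, p, rng):
    """Denoising-autoencoder style corruption: independently zero out a
    fraction p of the input components; the network is then trained to
    reconstruct the clean x from this corrupted version."""
    keep = rng.random(x.shape) >= p
    return x * keep

rng = np.random.default_rng(0)
x = np.ones(10000)
x_tilde = masking_corrupt(x, 0.3, rng)
corrupted_frac = 1.0 - x_tilde.mean()   # roughly 0.3 of the entries are zeroed
```

Training then minimizes the reconstruction error between the decoder output on `x_tilde` and the clean `x`, which is what makes the learned features robust.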
1501.06180
2154582212
Motivated by the center-surround mechanism in the human visual attention system, we propose to use average contrast maps for the challenge of pedestrian detection in street scenes, based on the observation that pedestrians indeed exhibit discriminative contrast texture. Our main contributions are, first, the design of a local statistical multichannel descriptor that incorporates both color and gradient information. Second, we introduce a multidirection and multiscale contrast scheme based on grid cells to integrate expressive local variations. To address the issue of selecting the most discriminative features for assessment and classification, we perform extensive comparisons with respect to statistical descriptors, contrast measurements, and scale structures. In this way, we obtain reasonable results under various configurations. Empirical findings from applying our optimized detector on the INRIA and Caltech pedestrian datasets show that our features yield state-of-the-art performance in pedestrian detection.
Most computational approaches to visual attention determine center-surround contrasts by DoG filters or approximations thereof @cite_23 . Recently, several researchers have represented the central and surrounding areas in terms of feature distributions so as to capture more information about the areas. These distributions were either discrete, in the form of histograms @cite_48 , or continuous, fitted to a normal distribution @cite_49 , and various distance measures can be applied between the central and surrounding distributions to quantify local contrast. However, we notice that the above strategies only achieve reasonable results for rather conspicuous scenarios, e.g., a big red flower standing out against surrounding green leaves. In fact, the background in our case is much more complex, and the previous contrast models are not guaranteed to perform well. Consequently, we train and evaluate specialized contrast schemes in this paper and aim to find the optimal configuration for our applications.
{ "cite_N": [ "@cite_48", "@cite_49", "@cite_23" ], "mid": [ "2065985528", "2280306935", "2128272608" ], "abstract": [ "In this paper, we introduce a new method to detect salient objects in images. The approach is based on the standard structure of cognitive visual attention models, but realizes the computation of saliency in each feature dimension in an information-theoretic way. The method allows a consistent computation of all feature channels and a well-founded fusion of these channels to a saliency map. Our framework enables the computation of arbitrarily scaled features and local center-surround pairs in an efficient manner. We show that our approach outperforms eight state-of-the-art saliency detectors in terms of precision and recall.", "Saliency is an attribute that is not included in an object itself, but arises from complex relations to the scene. Common belief in neuroscience is that objects are eye-catching if they exhibit an anomaly in some basic feature of human perception. This enables detection of object-like structures without prior knowledge. In this paper, we introduce an approach that models these object-to-scene relations based on probability theory. We rely on the conventional structure of cognitive visual attention systems, measuring saliency by local center to surround differences on several basic feature cues and multiple scales, but innovate how to model appearance and to quantify differences. Therefore, we propose an efficient procedure to compute ML-estimates for (multivariate) normal distributions of local feature statistics. Reducing feature statistics to Gaussians facilitates a closed-form solution for the W_2-distance (Wasserstein metric based on the Euclidean norm) between a center and a surround distribution. 
On a widely used benchmark for salient object detection, our approach, named CoDi-Saliency (for Continuous Distributions), outperformed nine state-of-the-art saliency detectors in terms of precision and recall.", "A visual attention system, inspired by the behavior and the neuronal architecture of the early primate visual system, is presented. Multiscale image features are combined into a single topographical saliency map. A dynamical neural network then selects attended locations in order of decreasing saliency. The system breaks down the complex problem of scene understanding by rapidly selecting, in a computationally efficient manner, conspicuous locations to be analyzed in detail." ] }
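The center-surround contrast idea shared by the saliency models above can be sketched with simple box-filter means standing in for the Gaussians (a deliberate simplification; the window radii `rc` and `rs` are illustrative parameters, not values from the papers):

```python
import numpy as np

def box_mean(img, r):
    """Mean over a (2r+1)x(2r+1) window, via an integral-image trick
    with edge padding so border pixels get full-size windows."""
    p = np.pad(img, r, mode="edge")
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))          # zero row/column for clean window differences
    k = 2 * r + 1
    s = c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]
    return s / (k * k)

def center_surround(img, rc=1, rs=4):
    """Contrast map = |small center mean - larger surround mean| (DoG-style)."""
    return np.abs(box_mean(img, rc) - box_mean(img, rs))

# A lone bright pixel is a conspicuous "center" against a dark surround,
# so its contrast should exceed that of the empty background.
spot = np.zeros((9, 9))
spot[4, 4] = 1.0
contrast = center_surround(spot)
```

The distribution-based methods above replace the two means with histogram or Gaussian statistics of the two windows, but the center-versus-surround comparison is the same.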
1501.06272
2949235290
With the rapid growth of web images, hashing has received increasing interest in large scale image retrieval. Research efforts have been devoted to learning compact binary codes that preserve semantic similarity based on labels. However, most of these hashing methods are designed to handle simple binary similarity. The complex multilevel semantic structure of images associated with multiple labels has not yet been well explored. Here we propose a deep semantic ranking based method for learning hash functions that preserve multilevel semantic similarity between multi-label images. In our approach, a deep convolutional neural network is incorporated into the hash functions to jointly learn feature representations and the mappings from them to hash codes, which avoids the limited semantic representation power of hand-crafted features. Meanwhile, a ranking list that encodes the multilevel similarity information is employed to guide the learning of such deep hash functions. An effective scheme based on a surrogate loss is used to solve the intractable optimization problem of the nonsmooth, multivariate ranking measures involved in the learning procedure. Experimental results show the superiority of our proposed approach over several state-of-the-art hashing methods in terms of ranking evaluation metrics when tested on multi-label image datasets.
As described before, existing hashing methods can be roughly divided into two categories: data-independent and data-dependent. Here we mainly discuss data-dependent hashing methods that preserve semantic structure, which is the focus of this paper. Iterative quantization with canonical correlation analysis (CCA-ITQ) @cite_11 utilizes CCA with labels to reduce the dimensionality of the input data and binarizes the outcome by minimizing the quantization error; only pointwise label information is exploited to guide hash function learning. By comparison, some approaches try to preserve semantic similarity based on pairwise relations. Boosted similarity sensitive coding (BSSC) @cite_14 assigns each pair of data points a label in order to learn a set of weak classifiers as hash functions. Semi-supervised hashing (SSH) @cite_24 minimizes an empirical error over the labeled pairs of points and makes the hash codes balanced and uncorrelated to avoid overfitting. Motivated by the latent structural SVM, minimal loss hashing (MLH) @cite_4 proposes a pairwise hinge-like loss function and minimizes its upper bound to learn similarity-preserving binary codes.
{ "cite_N": [ "@cite_24", "@cite_14", "@cite_4", "@cite_11" ], "mid": [ "2044195942", "2152926413", "2221852422", "1974647172" ], "abstract": [ "Large scale image search has recently attracted considerable attention due to easy availability of huge amounts of data. Several hashing methods have been proposed to allow approximate but highly efficient search. Unsupervised hashing methods show good performance with metric distances but, in image search, semantic similarity is usually given in terms of labeled pairs of images. There exist supervised hashing methods that can handle such semantic similarity but they are prone to overfitting when labeled data is small or noisy. Moreover, these methods are usually very slow to train. In this work, we propose a semi-supervised hashing method that is formulated as minimizing empirical error on the labeled data while maximizing variance and independence of hash bits over the labeled and unlabeled data. The proposed method can handle both metric as well as semantic similarity. The experimental results on two large datasets (up to one million samples) demonstrate its superior performance over state-of-the-art supervised and unsupervised methods.", "Example-based methods are effective for parameter estimation problems when the underlying system is simple or the dimensionality of the input is low. For complex and high-dimensional problems such as pose estimation, the number of required examples and the computational complexity rapidly become prohibitively high. We introduce a new algorithm that learns a set of hashing functions that efficiently index examples relevant to a particular estimation task. Our algorithm extends locality-sensitive hashing, a recently developed method to find approximate neighbors in time sublinear in the number of examples. This method depends critically on the choice of hash functions that are optimally relevant to a particular estimation problem. 
Experiments demonstrate that the resulting algorithm, which we call parameter-sensitive hashing, can rapidly and accurately estimate the articulated pose of human figures from a large database of example images.", "We propose a method for learning similarity-preserving hash functions that map high-dimensional data onto binary codes. The formulation is based on structured prediction with latent variables and a hinge-like loss function. It is efficient to train for large datasets, scales well to large code lengths, and outperforms state-of-the-art methods.", "This paper addresses the problem of learning similarity-preserving binary codes for efficient similarity search in large-scale image collections. We formulate this problem in terms of finding a rotation of zero-centered data so as to minimize the quantization error of mapping this data to the vertices of a zero-centered binary hypercube, and propose a simple and efficient alternating minimization algorithm to accomplish this task. This algorithm, dubbed iterative quantization (ITQ), has connections to multiclass spectral clustering and to the orthogonal Procrustes problem, and it can be used both with unsupervised data embeddings such as PCA and supervised embeddings such as canonical correlation analysis (CCA). The resulting binary codes significantly outperform several other state-of-the-art methods. We also show that further performance improvements can result from transforming the data with a nonlinear kernel mapping prior to PCA or CCA. Finally, we demonstrate an application of ITQ to learning binary attributes or \"classemes\" on the ImageNet data set." ] }
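The ITQ procedure described in the last abstract above alternates between binarizing the rotated data and refitting the rotation by orthogonal Procrustes. A rough sketch of one such alternation (illustrative, not the authors' released code; `V` stands for zero-centered, PCA/CCA-projected data):

```python
import numpy as np

def itq_iteration(V, R):
    """One ITQ-style alternation: binarize the rotated data, then refit
    the rotation by orthogonal Procrustes (SVD of V^T B)."""
    B = np.sign(V @ R)
    B[B == 0] = 1.0                      # guard: map sign(0) to +1
    U, _, Vt = np.linalg.svd(V.T @ B)    # argmin_R ||B - V R||_F over orthogonal R
    return B, U @ Vt

rng = np.random.default_rng(0)
V = rng.standard_normal((100, 8))                  # stand-in for projected data
R = np.linalg.qr(rng.standard_normal((8, 8)))[0]   # random initial rotation
err = [np.linalg.norm(np.sign(V @ R) - V @ R)]
for _ in range(5):
    B, R = itq_iteration(V, R)
    err.append(np.linalg.norm(B - V @ R))          # quantization error never increases
```

Because each half-step minimizes the same objective over one variable with the other fixed, the quantization error is non-increasing, which is why the simple alternation converges in practice.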
1501.06272
2949235290
With the rapid growth of web images, hashing has received increasing interest in large scale image retrieval. Research efforts have been devoted to learning compact binary codes that preserve semantic similarity based on labels. However, most of these hashing methods are designed to handle simple binary similarity. The complex multilevel semantic structure of images associated with multiple labels has not yet been well explored. Here we propose a deep semantic ranking based method for learning hash functions that preserve multilevel semantic similarity between multi-label images. In our approach, a deep convolutional neural network is incorporated into the hash functions to jointly learn feature representations and the mappings from them to hash codes, which avoids the limited semantic representation power of hand-crafted features. Meanwhile, a ranking list that encodes the multilevel similarity information is employed to guide the learning of such deep hash functions. An effective scheme based on a surrogate loss is used to solve the intractable optimization problem of the nonsmooth, multivariate ranking measures involved in the learning procedure. Experimental results show the superiority of our proposed approach over several state-of-the-art hashing methods in terms of ranking evaluation metrics when tested on multi-label image datasets.
Furthermore, order-preserving approaches, which are more closely related to this paper, explicitly use ranking information in their objective functions to learn hash codes that preserve the similarity order in the feature or semantic space. Order preserving hashing (OPH) @cite_13 formulates an alignment between the similarity orders computed in the original Euclidean space and in the Hamming space, which can be solved using the quadratic penalty algorithm. On the basis of @cite_4 , Hamming distance metric learning (HDML) @cite_7 develops a metric learning framework based on a triplet ranking loss to preserve relative similarity. However, this triplet loss function only considers local ranking information and is limited in capturing multilevel similarity. By using a triplet representation for listwise supervision, ranking-based supervised hashing (RSH) @cite_23 minimizes the inconsistency of ranking order between the Hamming and original spaces to keep the global ranking order. Different from RSH, our method leverages a deep learning model to discover deeper semantic similarity and scales well to large training sets. Column generation hashing (CGH) @cite_22 and StructHash @cite_17 combine ranking information with the boosting framework to learn a weighted Hamming embedding. In contrast, our method needs no extra weights to rank hash codes.
{ "cite_N": [ "@cite_4", "@cite_22", "@cite_7", "@cite_23", "@cite_13", "@cite_17" ], "mid": [ "2221852422", "2949478753", "2113307832", "2126210882", "2089632823", "2952986702" ], "abstract": [ "We propose a method for learning similarity-preserving hash functions that map high-dimensional data onto binary codes. The formulation is based on structured prediction with latent variables and a hinge-like loss function. It is efficient to train for large datasets, scales well to large code lengths, and outperforms state-of-the-art methods.", "Fast nearest neighbor searching is becoming an increasingly important tool in solving many large-scale problems. Recently a number of approaches to learning data-dependent hash functions have been developed. In this work, we propose a column generation based method for learning data-dependent hash functions on the basis of proximity comparison information. Given a set of triplets that encode the pairwise proximity comparison information, our method learns hash functions that preserve the relative comparison relationships in the data as well as possible within the large-margin learning framework. The learning procedure is implemented using column generation and hence is named CGHash. At each iteration of the column generation procedure, the best hash function is selected. Unlike most other hashing methods, our method generalizes to new data points naturally; and has a training objective which is convex, thus ensuring that the global optimum can be identified. Experiments demonstrate that the proposed method learns compact binary codes and that its retrieval performance compares favorably with state-of-the-art methods when tested on a few benchmark datasets.", "Motivated by large-scale multimedia applications we propose to learn mappings from high-dimensional data to binary codes that preserve semantic similarity. 
Binary codes are well suited to large-scale applications as they are storage efficient and permit exact sub-linear kNN search. The framework is applicable to broad families of mappings, and uses a flexible form of triplet ranking loss. We overcome discontinuous optimization of the discrete mappings by minimizing a piecewise-smooth upper bound on empirical loss, inspired by latent structural SVMs. We develop a new loss-augmented inference algorithm that is quadratic in the code length. We show strong retrieval performance on CIFAR-10 and MNIST, with promising classification results using no more than kNN on the binary codes.", "Hashing techniques have been intensively investigated in the design of highly efficient search engines for large-scale computer vision applications. Compared with prior approximate nearest neighbor search approaches like tree-based indexing, hashing-based search schemes have prominent advantages in terms of both storage and computational efficiencies. Moreover, the procedure of devising hash functions can be easily incorporated into sophisticated machine learning tools, leading to data-dependent and task-specific compact hash codes. Therefore, a number of learning paradigms, ranging from unsupervised to supervised, have been applied to compose appropriate hash functions. However, most of the existing hash function learning methods either treat hash function design as a classification problem or generate binary codes to satisfy pairwise supervision, and have not yet directly optimized the search accuracy. In this paper, we propose to leverage listwise supervision into a principled hash function learning framework. In particular, the ranking information is represented by a set of rank triplets that can be used to assess the quality of ranking. Simple linear projection-based hash functions are solved efficiently through maximizing the ranking quality over the training data. 
We carry out experiments on large image datasets with size up to one million and compare with the state-of-the-art hashing techniques. The extensive results corroborate that our learned hash codes via listwise supervision can provide superior search accuracy without incurring heavy computational overhead.", "In this paper, we propose a novel method to learn similarity-preserving hash functions for approximate nearest neighbor (NN) search. The key idea is to learn hash functions by maximizing the alignment between the similarity orders computed from the original space and the ones in the Hamming space. The problem of mapping the NN points into different hash codes is taken as a classification problem in which the points are categorized into several groups according to the Hamming distances to the query. The hash functions are optimized from the classifiers pooled over the training points. Experimental results demonstrate the superiority of our approach over existing state-of-the-art hashing techniques.", "Hashing has proven a valuable tool for large-scale information retrieval. Despite much success, existing hashing methods optimize over simple objectives such as the reconstruction error or graph Laplacian related loss functions, instead of the performance evaluation criteria of interest---multivariate performance measures such as the AUC and NDCG. Here we present a general framework (termed StructHash) that allows one to directly optimize multivariate performance measures. The resulting optimization problem can involve exponentially or infinitely many variables and constraints, which is more challenging than standard structured output learning. To solve the StructHash optimization problem, we use a combination of column generation and cutting-plane techniques. We demonstrate the generality of StructHash by applying it to ranking prediction and image retrieval, and show that it outperforms a few state-of-the-art hashing methods." ] }
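The triplet ranking losses used by HDML and RSH above share one hinge-style form: a query should be closer to a similar item than to a dissimilar one by some margin. A minimal sketch on real-valued relaxed codes (the margin value and squared-distance choice are illustrative assumptions, not taken from the papers):

```python
import numpy as np

def triplet_hinge_loss(q, pos, neg, margin=1.0):
    """Zero when the similar item is closer than the dissimilar one by
    at least `margin`; grows linearly with the violation otherwise."""
    d_pos = float(np.sum((q - pos) ** 2))
    d_neg = float(np.sum((q - neg) ** 2))
    return max(0.0, margin + d_pos - d_neg)

q = np.array([1.0, 1.0])
good = triplet_hinge_loss(q, pos=np.array([1.0, 1.0]), neg=np.array([-1.0, -1.0]))  # ordering satisfied
bad = triplet_hinge_loss(q, pos=np.array([-1.0, -1.0]), neg=np.array([1.0, 1.0]))   # ordering violated
```

As the paragraph notes, a loss built from independent triplets only sees local ranking information, which is exactly the limitation the listwise methods address.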
1501.06272
2949235290
With the rapid growth of web images, hashing has received increasing interest in large scale image retrieval. Research efforts have been devoted to learning compact binary codes that preserve semantic similarity based on labels. However, most of these hashing methods are designed to handle simple binary similarity. The complex multilevel semantic structure of images associated with multiple labels has not yet been well explored. Here we propose a deep semantic ranking based method for learning hash functions that preserve multilevel semantic similarity between multi-label images. In our approach, a deep convolutional neural network is incorporated into the hash functions to jointly learn feature representations and the mappings from them to hash codes, which avoids the limited semantic representation power of hand-crafted features. Meanwhile, a ranking list that encodes the multilevel similarity information is employed to guide the learning of such deep hash functions. An effective scheme based on a surrogate loss is used to solve the intractable optimization problem of the nonsmooth, multivariate ranking measures involved in the learning procedure. Experimental results show the superiority of our proposed approach over several state-of-the-art hashing methods in terms of ranking evaluation metrics when tested on multi-label image datasets.
Deep learning models, particularly deep convolutional neural networks (CNNs), have achieved great success in various visual tasks such as image classification, annotation, retrieval and object detection @cite_26 @cite_32 @cite_10 @cite_18 @cite_31 due to their powerful representation learning capability. Some ranking-loss-based CNNs have been explored in these tasks. @cite_18 use a ranking loss based on triplet sampling in CNNs to learn an image similarity metric. @cite_32 incorporate a WARP approximate ranking loss into CNNs for image annotation. A few hashing methods also use deep models. @cite_28 use a deep generative model as hash functions. Similarly, @cite_1 model a deep network by using multiple layers of RBMs. Given approximate hash codes learned from pairwise similarity matrix decomposition, @cite_3 learn hash functions using CNNs to fit the learned hash codes. However, these methods do not explicitly impose a ranking constraint on the deep models, and so cannot address the multilevel similarity problem.
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_28", "@cite_1", "@cite_32", "@cite_3", "@cite_31", "@cite_10" ], "mid": [ "1975517671", "", "", "2154956324", "1514027499", "2293824885", "2102605133", "2123024445" ], "abstract": [ "Learning fine-grained image similarity is a challenging task. It needs to capture between-class and within-class image differences. This paper proposes a deep ranking model that employs deep learning techniques to learn similarity metric directly from images. It has higher learning capability than models based on hand-crafted features. A novel multiscale network structure has been developed to describe the images effectively. An efficient triplet sampling algorithm is also proposed to learn the model with distributed asynchronized stochastic gradient. Extensive experiments show that the proposed algorithm outperforms models based on hand-crafted visual features and deep classification models.", "", "", "The Internet contains billions of images, freely available online. Methods for efficiently searching this incredibly rich resource are vital for a large number of applications. These include object recognition, computer graphics, personal photo collections, online image search tools. In this paper, our goal is to develop efficient image search and scene matching techniques that are not only fast, but also require very little memory, enabling their use on standard hardware or even on handheld devices. Our approach uses recently developed machine learning techniques to convert the Gist descriptor (a real valued vector that describes orientation energies at different scales and orientations within an image) to a compact binary code, with a few hundred bits per image. Using our scheme, it is possible to perform real-time searches with millions from the Internet using a single large PC and obtain recognition results comparable to the full descriptor. 
Using our codes on high quality labeled images from the LabelMe database gives surprisingly powerful recognition results using simple nearest neighbor techniques.", "Multilabel image annotation is one of the most important challenges in computer vision with many real-world applications. While existing work usually uses conventional visual features for multilabel annotation, features based on Deep Neural Networks have shown potential to significantly boost performance. In this work, we propose to leverage the advantage of such features and analyze key components that lead to better performances. Specifically, we show that a significant performance gain could be obtained by combining convolutional architectures with approximate top- @math ranking objectives, as they naturally fit the multilabel tagging problem. Our experiments on the NUS-WIDE dataset outperform the conventional visual features by about 10%, obtaining the best reported performance in the literature.", "Hashing is a popular approximate nearest neighbor search approach for large-scale image retrieval. Supervised hashing, which incorporates similarity/dissimilarity information on entity pairs to improve the quality of hashing function learning, has recently received increasing attention. However, in the existing supervised hashing methods for images, an input image is usually encoded by a vector of handcrafted visual features. Such hand-crafted feature vectors do not necessarily preserve the accurate semantic similarities of image pairs, which may often degrade the performance of hashing function learning. In this paper, we propose a supervised hashing method for image retrieval, in which we automatically learn a good image representation tailored to hashing as well as a set of hash functions. The proposed method has two stages. 
In the first stage, given the pairwise similarity matrix S over training images, we propose a scalable coordinate descent method to decompose S into a product HH^T, where H is a matrix with each of its rows being the approximate hash code associated with a training image. In the second stage, we propose to simultaneously learn a good feature representation for the input images as well as a set of hash functions, via a deep convolutional network tailored to the learned hash codes in H and optionally the discrete class labels of the images. Extensive empirical evaluations on three benchmark datasets with different kinds of images show that the proposed method has superior performance gains over several state-of-the-art supervised and unsupervised hashing methods.
Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.", "Modern visual recognition systems are often limited in their ability to scale to large numbers of object categories. This limitation is in part due to the increasing difficulty of acquiring sufficient training data in the form of labeled images as the number of object categories grows. One remedy is to leverage data from other sources - such as text data - both to train visual models and to constrain their predictions. In this paper we present a new deep visual-semantic embedding model trained to identify visual objects using both labeled image data as well as semantic information gleaned from unannotated text. We demonstrate that this model matches state-of-the-art performance on the 1000-class ImageNet object recognition challenge while making more semantically reasonable errors, and also show that the semantic information can be exploited to make predictions about tens of thousands of image labels not observed during training. Semantic knowledge improves such zero-shot predictions achieving hit rates of up to 18% across thousands of novel labels never seen by the visual model.
1501.06262
1985912834
Human activity understanding with 3D depth sensors has received increasing attention in multimedia processing and interactions. This work targets developing a novel deep model for automatic activity recognition from RGB-D videos. We represent each human activity as an ensemble of cubic-like video segments, and learn to discover the temporal structures for a category of activities, i.e. how the activities are to be decomposed in terms of classification. Our model can be regarded as a structured deep architecture, as it extends the convolutional neural networks (CNNs) by incorporating structure alternatives. Specifically, we build the network consisting of 3D convolutions and max-pooling operators over the video segments, and introduce latent variables in each convolutional layer to manipulate the activation of neurons. Our model thus advances existing approaches in two aspects: (i) it acts directly on the raw inputs (grayscale-depth data) to conduct recognition instead of relying on hand-crafted features, and (ii) the model structure can be dynamically adjusted to account for the temporal variations of human activities, i.e. the network configuration is allowed to be partially activated during inference. For model training, we propose an EM-type optimization method that iteratively (i) discovers the latent structure by determining the decomposed actions for each training example, and (ii) learns the network parameters by using the back-propagation algorithm. Our approach is validated in challenging scenarios, and outperforms state-of-the-art methods. In addition, a large human activity database of RGB-D videos is presented.
A batch of works on human action and activity understanding focused mainly on developing robust and descriptive features @cite_39 @cite_28 @cite_29 @cite_42 @cite_22 @cite_24 @cite_34 . Xia and Aggarwal @cite_39 extracted spatio-temporal interest points from depth videos (DSTIP) and developed a depth cuboid similarity feature (DCSF) to model human activities. Oreifej and Liu @cite_29 proposed to capture spatio-temporal changes of activities by using a histogram of oriented 4D surface normals (HON4D). Most of these methods, however, overlooked detailed spatio-temporal structure information, and were limited to periodic activities.
{ "cite_N": [ "@cite_22", "@cite_28", "@cite_29", "@cite_42", "@cite_39", "@cite_24", "@cite_34" ], "mid": [ "2002585172", "2012451022", "2085735683", "2052916967", "2162415752", "2008824967", "" ], "abstract": [ "In this work, we present a SIFT-Bag based generative-to-discriminative framework for addressing the problem of video event recognition in unconstrained news videos. In the generative stage, each video clip is encoded as a bag of SIFT feature vectors, the distribution of which is described by a Gaussian Mixture Models (GMM). In the discriminative stage, the SIFT-Bag Kernel is designed for characterizing the property of Kullback-Leibler divergence between the specialized GMMs of any two video clips, and then this kernel is utilized for supervised learning in two ways. On one hand, this kernel is further refined in discriminating power for centroid-based video event classification by using the Within-Class Covariance Normalization approach, which depresses the kernel components with high-variability for video clips of the same event. On the other hand, the SIFT-Bag Kernel is used in a Support Vector Machine for margin-based video event classification. Finally, the outputs from these two classifiers are fused together for final decision. The experiments on the TRECVID 2005 corpus demonstrate that the mean average precision is boosted from the best reported 38.2 in [36] to 60.4 based on our new framework.", "We present a new method to classify human activities by leveraging on the cues available from depth images alone. Towards this end, we propose a descriptor which couples depth and spatial information of the segmented body to describe a human pose. Unique poses (i.e. codewords) are then identified by a spatial-based clustering step. Given a video sequence of depth images, we segment humans from the depth images and represent these segmented bodies as a sequence of codewords. 
We exploit unique poses of an activity and the temporal ordering of these poses to learn subsequences of codewords which are strongly discriminative for the activity. Each discriminative subsequence acts as a classifier and we learn a boosted ensemble of discriminative subsequences to assign a confidence score for the activity label of the test sequence. Unlike existing methods which demand accurate tracking of 3D joint locations or couple depth with color image information as recognition cues, our method requires only the segmentation masks from depth images to recognize an activity. Experimental results on the publicly available Human Activity Dataset (which comprises 12 challenging activities) demonstrate the validity of our method, where we attain a precision/recall of 78.1%/75.4% when the person was not seen before in the training set, and 94.6%/93.1% when the person was seen before.
Through extensive experiments, we demonstrate that our descriptor better captures the joint shape-motion cues in the depth sequence, and thus outperforms the state-of-the-art on all relevant benchmarks.", "Human action recognition and localization is a challenging vision task with promising applications. To tackle this problem, recently developed commodity depth sensor (e.g., Microsoft Kinect) has opened up new opportunities with several developed human motion features based on depth image for action representation. However, how depth information can be effectively adopted in the middle or high level representation in action detection, in particular, the depth induced three dimensional contextual information for modeling interactions between human-human, human-object and human-surroundings has yet been explored. In this paper, we propose a novel action recognition and localization framework which effectively fuses depth-induced contextual information from different levels of the processing pipeline for understanding various interactions. First, depth image is combined with grayscale image for more robust human subject and object detection. Second, three dimensional spatial and temporal relationship among human subjects or objects is represented based on the combination of grayscale and depth images. Third, depth information is further utilized to represent different types of indoor scenes. Finally, we fuse these multiple stage depth-induced contextual information to yield an unified action detection framework. Extensive experiments on a challenging grayscale + depth human action detection benchmark database demonstrate the effectiveness of the depth-induced contextual information and the high detection accuracy of the proposed framework.", "Local spatio-temporal interest points (STIPs) and the resulting features from RGB videos have been proven successful at activity recognition that can handle cluttered backgrounds and partial occlusions. 
In this paper, we propose its counterpart in depth video and show its efficacy on activity recognition. We present a filtering method to extract STIPs from depth videos (called DSTIP) that effectively suppress the noisy measurements. Further, we build a novel depth cuboid similarity feature (DCSF) to describe the local 3D depth cuboid around the DSTIPs with an adaptable supporting size. We test this feature on activity recognition application using the public MSRAction3D, MSRDailyActivity3D datasets and our own dataset. Experimental evaluation shows that the proposed approach outperforms state-of-the-art activity recognition algorithms on depth videos, and the framework is more widely applicable than existing approaches. We also give detailed comparisons with other features and analysis of choice of parameters as a guidance for applications.", "In this paper, we propose an effective method to recognize human actions from sequences of depth maps, which provide additional body shape and motion information for action recognition. In our approach, we project depth maps onto three orthogonal planes and accumulate global activities through entire video sequences to generate the Depth Motion Maps (DMM). Histograms of Oriented Gradients (HOG) are then computed from DMM as the representation of an action video. The recognition results on Microsoft Research (MSR) Action3D dataset show that our approach significantly outperforms the state-of-the-art methods, although our representation is much more compact. In addition, we investigate how many frames are required in our framework to recognize actions on the MSR Action3D dataset. We observe that a short sub-sequence of 30-35 frames is sufficient to achieve comparable results to that operating on entire video sequences.", "" ] }
1501.06262
1985912834
Human activity understanding with 3D depth sensors has received increasing attention in multimedia processing and interactions. This work targets developing a novel deep model for automatic activity recognition from RGB-D videos. We represent each human activity as an ensemble of cubic-like video segments, and learn to discover the temporal structures for a category of activities, i.e. how the activities are to be decomposed in terms of classification. Our model can be regarded as a structured deep architecture, as it extends the convolutional neural networks (CNNs) by incorporating structure alternatives. Specifically, we build the network consisting of 3D convolutions and max-pooling operators over the video segments, and introduce latent variables in each convolutional layer to manipulate the activation of neurons. Our model thus advances existing approaches in two aspects: (i) it acts directly on the raw inputs (grayscale-depth data) to conduct recognition instead of relying on hand-crafted features, and (ii) the model structure can be dynamically adjusted to account for the temporal variations of human activities, i.e. the network configuration is allowed to be partially activated during inference. For model training, we propose an EM-type optimization method that iteratively (i) discovers the latent structure by determining the decomposed actions for each training example, and (ii) learns the network parameters by using the back-propagation algorithm. Our approach is validated in challenging scenarios, and outperforms state-of-the-art methods. In addition, a large human activity database of RGB-D videos is presented.
Several compositional approaches were studied for complex scenarios and achieved substantial progress @cite_10 @cite_33 @cite_5 @cite_26 @cite_2 @cite_32 @cite_3 @cite_14 , and they decomposed an activity into deformable parts and enriched the models with contextual information. For instance, @cite_10 recognized human activities in common videos by training hidden conditional random fields in a max-margin framework. For activity recognition in RGB-D data, @cite_26 employed the latent structural SVM to train the model with part-based pose trajectories and object manipulations. An ensemble model of actionlets was studied in @cite_32 to represent 3D human activities with a new feature called the local occupancy pattern (LOP). To handle more complicated activities with large temporal variations, some powerful models @cite_27 @cite_30 @cite_41 further discovered temporal structures of activities by localizing sequential actions. For example, Wang and Wu @cite_30 proposed to solve the temporal alignment of actions by maximum margin temporal warping. @cite_27 captured the latent temporal structures of 2D activities based on the variable-duration hidden Markov model. Koppula and Saxena @cite_36 applied Conditional Random Fields to model the sub-activities and affordances of the objects for 3D activity recognition.
{ "cite_N": [ "@cite_30", "@cite_26", "@cite_14", "@cite_33", "@cite_41", "@cite_36", "@cite_32", "@cite_3", "@cite_27", "@cite_2", "@cite_5", "@cite_10" ], "mid": [ "2160144863", "1992681465", "2128158057", "2139857301", "2137275576", "2114216982", "2143267104", "2163395651", "2142258645", "2063153269", "2009038235", "2162670331" ], "abstract": [ "Temporal misalignment and duration variation in video actions largely influence the performance of action recognition, but it is very difficult to specify effective temporal alignment on action sequences. To address this challenge, this paper proposes a novel discriminative learning-based temporal alignment method, called maximum margin temporal warping (MMTW), to align two action sequences and measure their matching score. Based on the latent structure SVM formulation, the proposed MMTW method is able to learn a phantom action template to represent an action class for maximum discrimination against other classes. The recognition of this action class is based on the associated learned alignment of the input action. Extensive experiments on five benchmark datasets have demonstrated that this MMTW model is able to significantly promote the accuracy and robustness of action recognition under temporal misalignment and variations.", "Understanding natural human activity involves not only identifying the action being performed, but also locating the semantic elements of the scene and describing the person's interaction with them. We present a system that is able to recognize complex, fine-grained human actions involving the manipulation of objects in realistic action sequences. Our method takes advantage of recent advances in sensors and pose trackers in learning an action model that draws on successful discriminative techniques while explicitly modeling both pose trajectories and object manipulations. 
By combining these elements in a single model, we are able to simultaneously recognize actions and track the location and manipulation of objects. To showcase this ability, we introduce a novel Cooking Action Dataset that contains video, depth readings, and pose tracks from a Kinect sensor. We show that our model outperforms existing state of the art techniques on this dataset as well as the VISINT dataset with only video sequences.", "The representation and recognition of complex semantic events (e.g. illegal parking, stealing objects) is a challenging task for high-level understanding of video sequence. To solve this problem, an attribute graph grammar for events modeling is studied in this paper. This grammar models the variability of semantic events by a set of meaningful ''event components'' with the spatio-temporal constraints. The event components are defined manually according to their semantic meaning, and further decomposed into atomic event primitives. These event primitives are learned on a object-trajectory table that describes mobile object attributes (location, velocity, and visibility) in a video sequence. A dictionary of temporal and spatial relations are defined to constrain the event primitives. With this representation, one observed event can be parsed into an ''event parse graph'', and all possible variability of one event can be modeled into an ''event And-Or graph'', in a syntactic way. The probability model of an ''event And-Or graph'' can be learned on a set of annotated event instances, and given a learned event And-Or graph, a Gibbs sampling scheme is utilized for inference on a testing video. In the experiments, we test events recognition performance of the proposed on both real indoor and outdoor videos and show quantitative recognition rate on the public LHI dataset.", "We address action recognition in videos by modeling the spatial-temporal structures of human poses. 
We start by improving a state-of-the-art method for estimating human joint locations from videos. More precisely, we obtain the K-best estimations output by the existing method and incorporate additional segmentation cues and temporal constraints to select the "best" one. Then we group the estimated joints into five body parts (e.g. the left arm) and apply data mining techniques to obtain a representation for the spatial-temporal structures of human actions. This representation captures the spatial configurations of body parts in one frame (by spatial-part-sets) as well as the body part movements (by temporal-part-sets) which are characteristic of human actions. It is interpretable, compact, and also robust to errors on joint estimations. Experimental results first show that our approach is able to localize body joints more accurately than existing methods. Next we show that it outperforms state of the art action recognizers on the UCF sport, the Keck Gesture and the MSR-Action3D datasets.
The model adaptively learns from data relevant video segments and their relations, addressing the "what" and "how." Inference and learning are formulated within the same framework - that of a robust, least-squares optimization - which is invariant to arbitrary permutations of nodes in spatiotemporal graphs. The model is used for parsing new videos in terms of detecting and localizing relevant activity parts. We outperform the state of the art on benchmark Olympic and UT human-interaction datasets, under a favorable complexity-vs.-accuracy trade-off.
The depth maps captured by the depth cameras are very noisy and the 3D positions of the tracked joints may be completely wrong if serious occlusions occur, which increases the intra-class variations in the actions. In this paper, an actionlet ensemble model is learnt to represent each action and to capture the intra-class variance. In addition, novel features that are suitable for depth data are proposed. They are robust to noise, invariant to translational and temporal misalignments, and capable of characterizing both the human motion and the human-object interactions. The proposed approach is evaluated on two challenging action recognition datasets captured by commodity depth cameras, and another dataset captured by a MoCap system. The experimental evaluations show that the proposed approach achieves superior performance to the state of the art algorithms.", "Human activity analysis is an important and challenging task in video content analysis and understanding. In this paper, we focus on the activity of small human group, which involves countable persons and complex interactions. To cope with the variant number of participants and inherent interactions within the activity, we propose a hierarchical model with three layers to depict the characteristics at different granularities. In traditional methods, group activity is represented mainly based on motion information, such as human trajectories, but ignoring discriminative appearance information, e.g. the rough sketch of a pose style. In our approach, we take advantage of both the motion and the appearance information in the spatiotemporal activity context under the hierarchical model. These features are inhomogeneous. Therefore, we employ multiple kernel learning methods to fuse the features for group activity recognition. 
Experiments on a surveillance-like human group activity database demonstrate the validity of our approach and the recognition performance is promising.", "In this paper, we tackle the problem of understanding the temporal structure of complex events in highly varying videos obtained from the Internet. Towards this goal, we utilize a conditional model trained in a max-margin framework that is able to automatically discover discriminative and interesting segments of video, while simultaneously achieving competitive accuracies on difficult detection and recognition tasks. We introduce latent variables over the frames of a video, and allow our algorithm to discover and assign sequences of states that are most discriminative for the event. Our model is based on the variable-duration hidden Markov model, and models durations of states in addition to the transitions between states. The simplicity of our model allows us to perform fast, exact inference using dynamic programming, which is extremely important when we set our sights on being able to process a very large number of videos quickly and efficiently. We show promising results on the Olympic Sports dataset [16] and the 2011 TRECVID Multimedia Event Detection task [18]. We also illustrate and visualize the semantic understanding capabilities of our model.", "Activity recognition in video is dominated by low- and mid-level features, and while demonstrably capable, by nature, these features carry little semantic meaning. Inspired by the recent object bank approach to image representation, we present Action Bank, a new high-level representation of video. Action bank is comprised of many individual action detectors sampled broadly in semantic space as well as viewpoint space. Our representation is constructed to be semantically rich and even when paired with simple linear SVM classifiers is capable of highly discriminative performance. We have tested action bank on four major activity recognition benchmarks. 
In all cases, our performance is better than the state of the art, namely 98.2% on KTH (better by 3.3%), 95.0% on UCF Sports (better by 3.7%), 57.9% on UCF50 (baseline is 47.9%), and 26.9% on HMDB51 (baseline is 23.2%). Furthermore, when we analyze the classifiers, we find strong transfer of semantics from the constituent action detectors to the bank classifier.
Our model is based on the recently proposed hidden conditional random field (HCRF) for object recognition. Similarly to HCRF for object recognition, we model a human action by a flexible constellation of parts conditioned on image observations. Differently from object recognition, our model combines both large-scale global features and local patch features to distinguish various actions. Our experimental results show that our model is comparable to other state-of-the-art approaches in action recognition. In particular, our experimental results demonstrate that combining large-scale global features and local patch features performs significantly better than directly applying HCRF on local patches alone. We also propose an alternative for learning the parameters of an HCRF model in a max-margin framework. We call this method the max-margin hidden conditional random field (MMHCRF). We demonstrate that MMHCRF outperforms HCRF in human action recognition. In addition, MMHCRF can handle a much broader range of complex hidden structures arising in various problems in computer vision." ] }
1501.06262
1985912834
Human activity understanding with 3D depth sensors has received increasing attention in multimedia processing and interactions. This work targets developing a novel deep model for automatic activity recognition from RGB-D videos. We represent each human activity as an ensemble of cubic-like video segments, and learn to discover the temporal structures for a category of activities, i.e. how the activities are to be decomposed in terms of classification. Our model can be regarded as a structured deep architecture, as it extends the convolutional neural networks (CNNs) by incorporating structure alternatives. Specifically, we build the network consisting of 3D convolutions and max-pooling operators over the video segments, and introduce latent variables in each convolutional layer to manipulate the activation of neurons. Our model thus advances existing approaches in two aspects: (i) it acts directly on the raw inputs (grayscale-depth data) to conduct recognition instead of relying on hand-crafted features, and (ii) the model structure can be dynamically adjusted to account for the temporal variations of human activities, i.e. the network configuration is allowed to be partially activated during inference. For model training, we propose an EM-type optimization method that iteratively (i) discovers the latent structure by determining the decomposed actions for each training example, and (ii) learns the network parameters by using the back-propagation algorithm. Our approach is validated in challenging scenarios, and outperforms state-of-the-art methods. In addition, a large human activity database of RGB-D videos is presented.
On the other hand, the past few years have seen a resurgence of research in the design of deep neural networks, and impressive progress was made on learning image features from raw data @cite_8 @cite_40 @cite_12 @cite_13 @cite_6 @cite_21 . To address human action recognition from videos, @cite_31 developed a novel deep architecture of convolutional networks, where they extracted features from both spatial and temporal dimensions. Amer and Todorovic @cite_20 applied Sum Product Networks (SPNs) to model human activities based on variable primitive actions. Our deep structured model can be viewed as an extension of these existing architectures, in which we make the network reconfigurable during learning and inference.
{ "cite_N": [ "@cite_8", "@cite_21", "@cite_6", "@cite_40", "@cite_31", "@cite_20", "@cite_13", "@cite_12" ], "mid": [ "2100495367", "", "", "1586730761", "1983364832", "2064052975", "", "1999192586" ], "abstract": [ "High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such “autoencoder” networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.", "", "", "We address the problem of learning good features for understanding video data. We introduce a model that learns latent representations of image sequences from pairs of successive images. The convolutional architecture of our model allows it to scale to realistic image sizes whilst using a compact parametrization. In experiments on the NORB dataset, we show our model extracts latent \"flow fields\" which correspond to the transformation between the pair of input frames. We also use our model to extract low-level motion features in a multi-stage architecture for action recognition, demonstrating competitive performance on both the KTH and Hollywood2 datasets.", "We consider the automated recognition of human actions in surveillance videos. Most current methods build classifiers based on complex handcrafted features computed from the raw inputs. Convolutional neural networks (CNNs) are a type of deep model that can act directly on the raw inputs. However, such models are currently limited to handling 2D inputs. In this paper, we develop a novel 3D CNN model for action recognition. 
This model extracts features from both the spatial and the temporal dimensions by performing 3D convolutions, thereby capturing the motion information encoded in multiple adjacent frames. The developed model generates multiple channels of information from the input frames, and the final feature representation combines information from all channels. To further boost the performance, we propose regularizing the outputs with high-level features and combining the predictions of a variety of different models. We apply the developed models to recognize human actions in the real-world environment of airport surveillance videos, and they achieve superior performance in comparison to baseline methods.", "This paper addresses recognition of human activities with stochastic structure, characterized by variable spacetime arrangements of primitive actions, and conducted by a variable number of actors. We demonstrate that modeling aggregate counts of visual words is surprisingly expressive enough for such a challenging recognition task. An activity is represented by a sum-product network (SPN). SPN is a mixture of bags-of-words (BoWs) with exponentially many mixture components, where subcomponents are reused by larger ones. SPN consists of terminal nodes representing BoWs, and product and sum nodes organized in a number of layers. The products are aimed at encoding particular configurations of primitive actions, and the sums serve to capture their alternative configurations. The connectivity of SPN and parameters of BoW distributions are learned under weak supervision using the EM algorithm. SPN inference amounts to parsing the SPN graph, which yields the most probable explanation (MPE) of the video in terms of activity detection and localization. SPN inference has linear complexity in the number of nodes, under fairly general conditions, enabling fast and scalable recognition. A new Volleyball dataset is compiled and annotated for evaluation. 
Our classification accuracy and localization precision and recall are superior to those of the state-of-the-art on the benchmark and our Volleyball datasets.", "", "Previous work on action recognition has focused on adapting hand-designed local features, such as SIFT or HOG, from static images to the video domain. In this paper, we propose using unsupervised feature learning as a way to learn features directly from video data. More specifically, we present an extension of the Independent Subspace Analysis algorithm to learn invariant spatio-temporal features from unlabeled video data. We discovered that, despite its simplicity, this method performs surprisingly well when combined with deep learning techniques such as stacking and convolution to learn hierarchical representations. By replacing hand-designed features with our learned features, we achieve classification results superior to all previous published results on the Hollywood2, UCF, KTH and YouTube action recognition datasets. On the challenging Hollywood2 and YouTube action datasets we obtain 53.3 and 75.8 respectively, which are approximately 5 better than the current best published results. Further benefits of this method, such as the ease of training and the efficiency of training and prediction, will also be discussed. You can download our code and learned spatio-temporal features here: http: ai.stanford.edu ∼wzou" ] }
1501.06380
2396094755
This paper reports the use of a document distance-based approach to automatically expand the number of available relevance judgements when these are limited and reduced to only positive judgements. This may happen, for example, when the only available judgements are extracted from a list of references in a published review paper. We compare the results on two document sets: OHSUMED, based on medical research publications, and TREC-8, based on news feeds. We show that evaluations based on these expanded relevance judgements are more reliable than those using only the initially available judgements, especially when the number of available judgements is very limited.
Previous work on the expansion of an initial set of document assessments includes the use of machine learning. For example, Büttcher et al. @cite_1 trained a classifier over a subset of qrels in order to expand the set of qrels. They showed that evaluation results with the expanded set of qrels had better quality than those using the source subset of qrels. Quality of the evaluation was measured by ranking a set of IR systems according to the new expanded qrels, and comparing it against the system ordering produced by the original qrels. In the clinical domain, @cite_3 explored the use of re-ranking methods based on reduced judgements, and found that the use of automatic classifiers would considerably reduce the time required for clinicians to identify a large portion (95%) of the relevant documents. They also reported limitations of the classifiers when the initial number of documents was small. Furthermore, in the scenario that we contemplate, where we rely on the list of references of a systematic review as the set of qrels, we do not have information about negative qrels, and therefore a classifier-based approach to expand the set of relevant documents would have to deal with this issue.
{ "cite_N": [ "@cite_1", "@cite_3" ], "mid": [ "2053100920", "168544607" ], "abstract": [ "Information retrieval evaluation based on the pooling method is inherently biased against systems that did not contribute to the pool of judged documents. This may distort the results obtained about the relative quality of the systems evaluated and thus lead to incorrect conclusions about the performance of a particular ranking technique. We examine the magnitude of this effect and explore how it can be countered by automatically building an unbiased set of judgements from the original, biased judgements obtained through pooling. We compare the performance of this method with other approaches to the problem of incomplete judgements, such as bpref, and show that the proposed method leads to higher evaluation accuracy, especially if the set of manual judgements is rich in documents, but highly biased against some systems.", "Searching and selecting articles to be included in systematic reviews is a real challenge for healthcare agencies responsible for publishing these reviews. The current practice of manually reviewing all papers returned by complex hand-crafted boolean queries is human labour-intensive and difficult to maintain. We demonstrate a two-stage searching system that takes advantage of ranked queries and support-vector machine text classification to assist in the retrieval of relevant articles, and to restrict results to higher-quality documents. Our proposed approach shows significant work saved in the systematic review process over a baseline of a keyword-based retrieval system." ] }
1501.06380
2396094755
This paper reports the use of a document distance-based approach to automatically expand the number of available relevance judgements when these are limited and reduced to only positive judgements. This may happen, for example, when the only available judgements are extracted from a list of references in a published review paper. We compare the results on two document sets: OHSUMED, based on medical research publications, and TREC-8, based on news feeds. We show that evaluations based on these expanded relevance judgements are more reliable than those using only the initially available judgements, especially when the number of available judgements is very limited.
Prior work using document distance criteria for expanding the qrels includes @cite_5 , who suggests that this approach may work for a document collection within the medical domain. In this paper we show that this approach improves the quality of evaluation for medical and news reports, and we therefore add further evidence of the plausibility of this method.
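A distance-based expansion of positive-only qrels can be sketched in a few lines. This is a minimal illustration of the idea, not the paper's exact procedure: bag-of-words cosine similarity stands in for the document distance, and the function names (`expand_qrels`, `cosine`) and the threshold value are ours:

```python
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def expand_qrels(docs, positive_qrels, threshold=0.4):
    """Mark an unjudged document as relevant when it lies close enough
    to any already-judged relevant document (positive judgements only,
    as when the qrels come from a review's reference list)."""
    bows = {d: Counter(text.lower().split()) for d, text in docs.items()}
    expanded = set(positive_qrels)
    for d in docs:
        if d not in expanded and any(
                cosine(bows[d], bows[r]) >= threshold for r in positive_qrels):
            expanded.add(d)
    return expanded

docs = {
    "d1": "statin therapy reduces cardiovascular risk",
    "d2": "statin treatment lowers cardiovascular risk in adults",
    "d3": "weather forecast for the coming week",
}
expanded = expand_qrels(docs, {"d1"})  # d2 joins the relevant set; d3 does not
```

Note that no negative judgements are needed: the expansion only compares candidates against the known relevant set, which is what makes this approach attractive in the reference-list scenario.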
{ "cite_N": [ "@cite_5" ], "mid": [ "2086909165" ], "abstract": [ "This paper proposes a document distance-based approach to automatically expand the number of available relevance judgements when those are limited and reduced to only positive judgements. This may happen, for example, when the only available judgements are extracted from a list of references in a published clinical systematic review. We show that evaluations based on these expanded relevance judgements are more reliable than those using only the initially available judgements. We also show the impact of such an evaluation approach as the number of initial judgements decreases." ] }
1501.06412
1731132221
Currently, the quality of a search engine is often determined using so-called topical relevance, i.e., the match between the user intent (expressed as a query) and the content of the document. In this work we want to draw attention to two aspects of retrieval system performance affected by the presentation of results: result attractiveness ("perceived relevance") and immediate usefulness of the snippets ("snippet relevance"). Perceived relevance may influence discoverability of good topical documents and seemingly better rankings may in fact be less useful to the user if good-looking snippets lead to irrelevant documents or vice-versa. And result items on a search engine result page (SERP) with high snippet relevance may add towards the total utility gained by the user even without the need to click those items. We start by motivating the need to collect different aspects of relevance (topical, perceived and snippet relevances) and how these aspects can improve evaluation measures. We then discuss possible ways to collect these relevance aspects using crowdsourcing and the challenges arising from that.
The idea to separate perceived and actual relevance was suggested by @cite_11 while designing the DBN click model. Unlike earlier click models, it assumes that the likelihood of a user clicking a document depends not on the topical relevance of the document, but rather on its perceived relevance, since the user can only judge based on the result snippet. This idea was later picked up by @cite_8 , who showed that while topical and perceived relevance are correlated, there is a noticeable discrepancy between them. They performed a simulated experiment by modeling the user click probability and showed that taking it into account would lead to a substantially different ordering of the systems participating in a TREC Web Track.
{ "cite_N": [ "@cite_8", "@cite_11" ], "mid": [ "1965862077", "2099213975" ], "abstract": [ "In batch evaluation of retrieval systems, performance is calculated based on predetermined relevance judgements applied to a list of documents returned by the system for a query. This evaluation paradigm, however, ignores the current standard operation of search systems which require the user to view summaries of documents prior to reading the documents themselves. In this paper we modify the popular IR metrics MAP and P@10 to incorporate the summary reading step of the search process, and study the effects on system rankings using TREC data. Based on a user study, we establish likely disagreements between relevance judgements of summaries and of documents, and use these values to seed simulations of summary relevance in the TREC data. Re-evaluating the runs submitted to the TREC Web Track, we find the average correlation between system rankings and the original TREC rankings is 0.8 (Kendall τ), which is lower than commonly accepted for system orderings to be considered equivalent. The system that has the highest MAP in TREC generally remains amongst the highest MAP systems when summaries are taken into account, but other systems become equivalent to the top ranked system depending on the simulated summary relevance. Given that system orderings alter when summaries are taken into account, the small amount of effort required to judge summaries in addition to documents (19 seconds vs 88 seconds on average in our data) should be undertaken when constructing test collections.", "As with any application of machine learning, web search ranking requires labeled data. The labels usually come in the form of relevance assessments made by editors. Click logs can also provide an important source of implicit feedback and can be used as a cheap proxy for editorial labels. 
The main difficulty however comes from the so called position bias - urls appearing in lower positions are less likely to be clicked even if they are relevant. In this paper, we propose a Dynamic Bayesian Network which aims at providing us with unbiased estimation of the relevance from the click logs. Experiments show that the proposed click model outperforms other existing click models in predicting both click-through rate and relevance." ] }
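The DBN-style separation of perceived and actual relevance can be made concrete with a small user simulation. This is a sketch of the model's assumptions, not Chapelle and Zhang's inference code; the parameter names (`attractiveness`, `satisfaction`, `gamma`) are illustrative:

```python
import random

def simulate_session(attractiveness, satisfaction, gamma=0.9, rng=None):
    """One query session under DBN-style assumptions: the user scans
    results top-down, clicks result i with probability attractiveness[i]
    (perceived relevance of the snippet), is satisfied by the landing
    page with probability satisfaction[i] (actual relevance), and
    otherwise continues scanning with persistence probability gamma."""
    rng = rng or random.Random(0)
    clicks = []
    for i, (a, s) in enumerate(zip(attractiveness, satisfaction)):
        if rng.random() < a:          # attractive snippet -> click
            clicks.append(i)
            if rng.random() < s:      # relevant document -> satisfied, stop
                break
        if rng.random() >= gamma:     # user gives up scanning
            break
    return clicks

# A good-looking snippet over an irrelevant page (high a, low s) draws
# clicks without satisfying the user - the discrepancy measured above.
clicks = simulate_session([0.9, 0.6, 0.3], [0.2, 0.7, 0.5])
```

Under these assumptions, click counts reflect attractiveness rather than topical relevance, which is why click logs alone can mis-rank systems.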
1501.06412
1731132221
Currently, the quality of a search engine is often determined using so-called topical relevance, i.e., the match between the user intent (expressed as a query) and the content of the document. In this work we want to draw attention to two aspects of retrieval system performance affected by the presentation of results: result attractiveness ("perceived relevance") and immediate usefulness of the snippets ("snippet relevance"). Perceived relevance may influence discoverability of good topical documents and seemingly better rankings may in fact be less useful to the user if good-looking snippets lead to irrelevant documents or vice-versa. And result items on a search engine result page (SERP) with high snippet relevance may add towards the total utility gained by the user even without the need to click those items. We start by motivating the need to collect different aspects of relevance (topical, perceived and snippet relevances) and how these aspects can improve evaluation measures. We then discuss possible ways to collect these relevance aspects using crowdsourcing and the challenges arising from that.
The idea to separate out snippet relevance appeared after the introduction of good abandonment @cite_7 : cases when users abandon a search result page without clicking any results and yet are satisfied. This may be due to the SERP being rich with instant answers @cite_9 , e.g., a weather widget or a dictionary box, or due to the fact that the query expresses a precise informational need that can easily be answered in a result snippet @cite_10 . In fact, as was shown by @cite_1 , a big portion of abandoned searches was due to pre-determined behavior: users came to a search engine with a prior intention to find an answer directly on a SERP. This is especially true for mobile search, where the internet connection can be slow or the user interface less convenient to use. We complement these works by arguing that a good and relevant snippet does not necessarily lead to a complete good abandonment, but rather represents an aspect of utility gained by the user that is currently ignored.
{ "cite_N": [ "@cite_1", "@cite_9", "@cite_10", "@cite_7" ], "mid": [ "2135565673", "2154792348", "2032691242", "2119074598" ], "abstract": [ "The lack of user activity on search results was until recently perceived as a sign of user dissatisfaction from retrieval performance, often, referring to such inactivity as a failed search (negative search abandonment). However, recent studies suggest that some search tasks can be achieved in the contents of the results displayed without the need to click through them (positive search abandonment); thus they emphasize the need to discriminate between successful and failed searches without follow-up clicks. In this paper, we study users’ inactivity on search results in relation to their pursued search goals and investigate the impact of displayed results on user clicking decisions. Our study examines two types of post-query user inactivity: pre-determined and post-determined depending on whether the user started searching with a preset intention to look for answers only within the result snippets and did not intend to click through the results, or the user inactivity was decided after the user had reviewed the list of retrieved documents. Our findings indicate that 27 of web searches in our sample are conducted with a pre-determined intention to look for answers in the results’ list and 75 of them can be satisfied in the contents of the displayed results. Moreover, in nearly half the queries that did not yield result visits, the desired information is found in the result snippets.", "Web search engines have historically focused on connecting people with information resources. For example, if a person wanted to know when their flight to Hyderabad was leaving, a search engine might connect them with the airline where they could find flight status information. 
However, search engines have recently begun to try to meet people's search needs directly, providing, for example, flight status information in response to queries that include an airline and a flight number. In this paper, we use large scale query log analysis to explore the challenges a search engine faces when trying to meet an information need directly in the search result page. We look at how people's interaction behavior changes when inline content is returned, finding that such content can cannibalize clicks from the algorithmic results. We see that in the absence of interaction behavior, an individual's repeat search behavior can be useful in understanding the content's value. We also discuss some of the ways user behavior can be used to provide insight into when inline answers might better trigger and what types of additional information might be included in the results.", "It is often considered that high abandonment rate corresponds to poor IR system performance. However several studies suggested that there are so called good abandonments, i.e. situations when search engine result page (SERP) contains enough details to satisfy the user information need without necessity to click on search results. In those papers only editorial metrics of SERP were used, and one cannot be sure that situations marked as good abandonments by assessors actually imply user satisfaction. In present work we propose some real-world evidences for good abandonments by calculating correlation between editorial and click metrics.", "Query abandonment by search engine users is generally considered to be a negative signal. In this paper, we explore the concept of good abandonment. We define a good abandonment as an abandoned query for which the user's information need was successfully addressed by the search results page, with no need to click on a result or refine the query. 
We present an analysis of abandoned internet search queries across two modalities (PC and mobile) in three locales. The goal is to approximate the prevalence of good abandonment, and to identify types of information needs that may lead to good abandonment, across different locales and modalities. Our study has three key findings: First, queries potentially indicating good abandonment make up a significant portion of all abandoned queries. Second, the good abandonment rate from mobile search is significantly higher than that from PC search, across all locales tested. Third, classified by type of information need, the major classes of good abandonment vary dramatically by both locale and modality. Our findings imply that it is a mistake to uniformly consider query abandonment as a negative signal. Further, there is a potential opportunity for search engines to drive additional good abandonment, especially for mobile search users, by improving search features and result snippets." ] }
1501.05700
1659086870
We introduce a new conception of community structure, which we refer to as hidden community structure. Hidden community structure refers to a specific type of overlapping community structure, in which the detection of weak, but meaningful, communities is hindered by the presence of stronger communities. We present Hidden Community Detection (HICODE), an algorithm template that identifies both the strong, dominant community structure as well as the weaker, hidden community structure in networks. HICODE begins by first applying an existing community detection algorithm to a network, and then removing the structure of the detected communities from the network. In this way, the structure of the weaker communities becomes visible. Through application of HICODE, we demonstrate that a wide variety of real networks from different domains contain many communities that, though meaningful, are not detected by any of the popular community detection algorithms that we consider. Additionally, on both real and synthetic networks containing a hidden ground-truth community structure, HICODE uncovers this structure better than any baseline algorithms that we compared against. For example, on a real network of undergraduate students that can be partitioned either by 'Dorm' (residence hall) or 'Year', we see that HICODE uncovers the weaker 'Year' communities with a JCRecall score (a recall-based metric that we define in the text) of over 0.7, while the baseline algorithms achieve scores below 0.2.
Community detection algorithms can be roughly grouped into those that partition the set of nodes in a network and those that find overlapping communities @cite_0 . Our work complements these concepts by introducing the notion of hidden structure, in which stronger community layers obscure deeper, but still meaningful, community structure.
{ "cite_N": [ "@cite_0" ], "mid": [ "2127048411" ], "abstract": [ "The modern science of networks has brought significant advances to our understanding of complex systems. One of the most relevant features of graphs representing real systems is community structure, or clustering, i.e. the organization of vertices in clusters, with many edges joining vertices of the same cluster and comparatively few edges joining vertices of different clusters. Such clusters, or communities, can be considered as fairly independent compartments of a graph, playing a similar role like, e.g., the tissues or the organs in the human body. Detecting communities is of great importance in sociology, biology and computer science, disciplines where systems are often represented as graphs. This problem is very hard and not yet satisfactorily solved, despite the huge effort of a large interdisciplinary community of scientists working on it over the past few years. We will attempt a thorough exposition of the topic, from the definition of the main elements of the problem, to the presentation of most methods developed, with a special focus on techniques designed by statistical physicists, from the discussion of crucial issues like the significance of clustering and how methods should be tested and compared against each other, to the description of applications to real networks." ] }
1501.05700
1659086870
We introduce a new conception of community structure, which we refer to as hidden community structure. Hidden community structure refers to a specific type of overlapping community structure, in which the detection of weak, but meaningful, communities is hindered by the presence of stronger communities. We present Hidden Community Detection (HICODE), an algorithm template that identifies both the strong, dominant community structure as well as the weaker, hidden community structure in networks. HICODE begins by first applying an existing community detection algorithm to a network, and then removing the structure of the detected communities from the network. In this way, the structure of the weaker communities becomes visible. Through application of HICODE, we demonstrate that a wide variety of real networks from different domains contain many communities that, though meaningful, are not detected by any of the popular community detection algorithms that we consider. Additionally, on both real and synthetic networks containing a hidden ground-truth community structure, HICODE uncovers this structure better than any baseline algorithms that we compared against. For example, on a real network of undergraduate students that can be partitioned either by 'Dorm' (residence hall) or 'Year', we see that HICODE uncovers the weaker 'Year' communities with a JCRecall score (a recall-based metric that we define in the text) of over 0.7, while the baseline algorithms achieve scores below 0.2.
A popular community metric is the modularity score, which measures the quality of a partitioning. It is defined as the fraction of edges that fall within communities, minus the expected value of that fraction if the edges had been distributed randomly while preserving the degree distribution @cite_9 .
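In symbols, for an undirected graph with adjacency matrix A, node degrees k_i, m edges, and community assignments c_i, Newman's modularity reads:

```latex
Q = \frac{1}{2m} \sum_{i,j} \left( A_{ij} - \frac{k_i k_j}{2m} \right) \delta(c_i, c_j)
```

where \delta(c_i, c_j) = 1 when nodes i and j belong to the same community and 0 otherwise; the k_i k_j / 2m term is the expected number of edges between i and j under the degree-preserving random model.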
{ "cite_N": [ "@cite_9" ], "mid": [ "2151936673" ], "abstract": [ "Many networks of interest in the sciences, including social networks, computer networks, and metabolic and regulatory networks, are found to divide naturally into communities or modules. The problem of detecting and characterizing this community structure is one of the outstanding issues in the study of networked systems. One highly effective approach is the optimization of the quality function known as “modularity” over the possible divisions of a network. Here I show that the modularity can be expressed in terms of the eigenvectors of a characteristic matrix for the network, which I call the modularity matrix, and that this expression leads to a spectral algorithm for community detection that returns results of demonstrably higher quality than competing methods in shorter running times. I illustrate the method with applications to several published network data sets." ] }
1501.05700
1659086870
We introduce a new conception of community structure, which we refer to as hidden community structure. Hidden community structure refers to a specific type of overlapping community structure, in which the detection of weak, but meaningful, communities is hindered by the presence of stronger communities. We present Hidden Community Detection (HICODE), an algorithm template that identifies both the strong, dominant community structure as well as the weaker, hidden community structure in networks. HICODE begins by first applying an existing community detection algorithm to a network, and then removing the structure of the detected communities from the network. In this way, the structure of the weaker communities becomes visible. Through application of HICODE, we demonstrate that a wide variety of real networks from different domains contain many communities that, though meaningful, are not detected by any of the popular community detection algorithms that we consider. Additionally, on both real and synthetic networks containing a hidden ground-truth community structure, HICODE uncovers this structure better than any baseline algorithms that we compared against. For example, on a real network of undergraduate students that can be partitioned either by 'Dorm' (residence hall) or 'Year', we see that HICODE uncovers the weaker 'Year' communities with a JCRecall score (a recall-based metric that we define in the text) of over 0.7, while the baseline algorithms achieve scores below 0.2.
The Louvain method is a heuristic algorithm for modularity maximization that builds a hierarchy of communities by first optimizing modularity locally and then grouping small communities together into larger communities @cite_3 .
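The local-optimization phase of the Louvain method can be sketched in pure Python. This is a didactic simplification, not the reference implementation: modularity (Newman's standard definition) is recomputed from scratch for each candidate move, whereas real Louvain uses a constant-time gain formula, and the aggregation phase that merges communities into super-nodes is omitted:

```python
import itertools

def modularity(adj, comm):
    """Newman modularity of an assignment `comm` (node -> label) for an
    undirected simple graph given as an adjacency dict."""
    m2 = sum(len(nbrs) for nbrs in adj.values())        # 2m
    q = 0.0
    for i, j in itertools.product(adj, repeat=2):
        if comm[i] == comm[j]:
            a = 1.0 if j in adj[i] else 0.0
            q += a - len(adj[i]) * len(adj[j]) / m2
    return q / m2

def local_moving(adj):
    """Louvain-style local moving: greedily relocate each node to the
    neighbouring community that most increases modularity, until no
    single move helps."""
    comm = {v: v for v in adj}                           # start as singletons
    improved = True
    while improved:
        improved = False
        for v in adj:
            start = comm[v]
            best_c, best_q = start, modularity(adj, comm)
            for c in {comm[u] for u in adj[v]}:          # neighbour communities
                comm[v] = c
                q = modularity(adj, comm)
                if q > best_q + 1e-12:
                    best_c, best_q = c, q
            comm[v] = best_c
            if best_c != start:
                improved = True
    return comm

# Two triangles joined by a bridge: the pass recovers the two triangles.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
       3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
labels = local_moving(adj)
```

On this graph the recovered partition has modularity 5/14, the known optimum for the two-triangle-plus-bridge example.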
{ "cite_N": [ "@cite_3" ], "mid": [ "2131681506" ], "abstract": [ "We propose a simple method to extract the community structure of large networks. Our method is a heuristic method that is based on modularity optimization. It is shown to outperform all other known community detection methods in terms of computation time. Moreover, the quality of the communities detected is very good, as measured by the so-called modularity. This is shown first by identifying language communities in a Belgian mobile phone network of 2 million customers and by analysing a web graph of 118 million nodes and more than one billion links. The accuracy of our algorithm is also verified on ad hoc modular networks." ] }
1501.05700
1659086870
We introduce a new conception of community structure, which we refer to as hidden community structure. Hidden community structure refers to a specific type of overlapping community structure, in which the detection of weak, but meaningful, communities is hindered by the presence of stronger communities. We present Hidden Community Detection (HICODE), an algorithm template that identifies both the strong, dominant community structure as well as the weaker, hidden community structure in networks. HICODE begins by first applying an existing community detection algorithm to a network, and then removing the structure of the detected communities from the network. In this way, the structure of the weaker communities becomes visible. Through application of HICODE, we demonstrate that a wide variety of real networks from different domains contain many communities that, though meaningful, are not detected by any of the popular community detection algorithms that we consider. Additionally, on both real and synthetic networks containing a hidden ground-truth community structure, HICODE uncovers this structure better than any baseline algorithms that we compared against. For example, on a real network of undergraduate students that can be partitioned either by 'Dorm' (residence hall) or 'Year', we see that HICODE uncovers the weaker 'Year' communities with a JCRecall score (a recall-based metric that we define in the text) of over 0.7, while the baseline algorithms achieve scores below 0.2.
Other algorithms use random walks @cite_15 @cite_1 @cite_5 , with the intuition that a good community is a set of nodes that random walks tend to get 'trapped' in. One such algorithm is Walktrap, which calculates a random walk-based distance measure between every pair of nodes, and then clusters using these distances @cite_6 . Another such algorithm is Infomap, which finds clusters by minimizing the expected length of a description of the information flow @cite_13 .
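The intuition behind Walktrap's distance can be sketched in a few lines of pure Python (unweighted graphs only; function names are ours, and Walktrap proper then merges communities agglomeratively using these distances, which is omitted here):

```python
import math

def walk_probs(adj, start, t):
    """Distribution of a t-step random walk from `start` on an
    undirected graph given as an adjacency dict."""
    p = {start: 1.0}
    for _ in range(t):
        nxt = {}
        for v, mass in p.items():
            share = mass / len(adj[v])        # uniform step to a neighbour
            for u in adj[v]:
                nxt[u] = nxt.get(u, 0.0) + share
        p = nxt
    return p

def walktrap_distance(adj, i, j, t=3):
    """Walktrap-style node distance: two nodes are close when short
    random walks from them spread over the graph in nearly the same
    way (differences normalised by degree)."""
    pi, pj = walk_probs(adj, i, t), walk_probs(adj, j, t)
    return math.sqrt(sum(
        (pi.get(k, 0.0) - pj.get(k, 0.0)) ** 2 / len(adj[k])
        for k in adj))

# Two triangles joined by a bridge: nodes inside one triangle are much
# closer, in walk terms, than nodes in different triangles.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
       3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
d_same = walktrap_distance(adj, 0, 1)
d_cross = walktrap_distance(adj, 0, 4)
```

Clustering nodes by this distance tends to group exactly those sets that trap short random walks, which is the intuition stated above.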
{ "cite_N": [ "@cite_1", "@cite_6", "@cite_5", "@cite_15", "@cite_13" ], "mid": [ "", "2033590892", "2028625941", "2242878970", "2164998314" ], "abstract": [ "", "In a representative embodiment of the invention described herein, a well logging system for investigating subsurface formations is controlled by a general purpose computer programmed for real-time operation. The system is cooperatively arranged to provide for all aspects of a well logging operation, such as data acquisition and processing, tool control, information or data storage, and data presentation as a well logging tool is moved through a wellbore. The computer controlling the system is programmed to provide for data acquisition and tool control commands in direct response to asynchronous real-time external events. Such real-time external events may occur, for example, as a result of movement of the logging tool over a selected depth interval, or in response to requests or commands directed to the system by the well logging engineer by means of keyboard input.", "A fuzzy overlapping community is an important kind of overlapping community in which each node belongs to each community to different extents. It exists in many real networks but how to identify a fuzzy overlapping community is still a challenging task. In this work, the concept of local random walk and a new distance metric are introduced. Based on the new distance measurement, the dissimilarity index between each node of a network is calculated firstly. Then in order to keep the original node distance as much as possible, the network structure is mapped into low-dimensional space by the multidimensional scaling (MDS). Finally, the fuzzy c-means clustering is employed to find fuzzy communities in a network. 
The experimental results show that the proposed algorithm is effective and efficient to identify the fuzzy overlapping communities in both artificial networks and real-world networks.", "Community detection is a common problem in various types of big graphs. It is meaningful to understand the functions and dynamics of networks. The challenges of detecting community for big graphs include high computational cost, no prior information, etc.. In this work, we analyze the process of random walking in graphs, and find out that the weight of an edge gotten by processing the vertices visited by the walker could be an indicator to measure the closeness of vertex connection. Based on this idea, we propose a community detection algorithm for undirected big graphs which consists of three steps, including random walking using a single walker, weight calculating for edges and community detecting. Our algorithm is running in O(n2) without prior information. Experimental results show that our algorithm is capable of detecting the community structure and the overlapping parts of graphs in real-world effectively, and handling the challenges of community detection in big graph era.", "To comprehend the multipartite organization of large-scale biological and social systems, we introduce an information theoretic approach that reveals community structure in weighted and directed networks. We use the probability flow of random walks on a network as a proxy for information flows in the real system and decompose the network into modules by compressing a description of the probability flow. The result is a map that both simplifies and highlights the regularities in the structure and their relationships. We illustrate the method by making a map of scientific communication as captured in the citation patterns of >6,000 journals. We discover a multicentric organization with fields that vary dramatically in size and degree of integration into the network of science. 
Along the backbone of the network—including physics, chemistry, molecular biology, and medicine—information flows bidirectionally, but the map reveals a directional pattern of citation from the applied fields to the basic sciences." ] }
1501.05700
1659086870
We introduce a new conception of community structure, which we refer to as hidden community structure. Hidden community structure refers to a specific type of overlapping community structure, in which the detection of weak, but meaningful, communities is hindered by the presence of stronger communities. We present Hidden Community Detection (HICODE), an algorithm template that identifies both the strong, dominant community structure as well as the weaker, hidden community structure in networks. HICODE begins by first applying an existing community detection algorithm to a network, and then removing the structure of the detected communities from the network. In this way, the structure of the weaker communities becomes visible. Through application of HICODE, we demonstrate that a wide variety of real networks from different domains contain many communities that, though meaningful, are not detected by any of the popular community detection algorithms that we consider. Additionally, on both real and synthetic networks containing a hidden ground-truth community structure, HICODE uncovers this structure better than any baseline algorithms that we compared against. For example, on a real network of undergraduate students that can be partitioned either by 'Dorm' (residence hall) or 'Year', we see that HICODE uncovers the weaker 'Year' communities with a JCRecall score (a recall-based metric that we define in the text) of over 0.7, while the baseline algorithms achieve scores below 0.2.
The Link Communities method was one of the first to approach the problem of finding overlapping communities. This algorithm calculates the similarity between adjacent edges and then clusters the links @cite_10 .
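The edge-similarity step can be sketched as follows. A common formulation compares two edges sharing a node via the Jaccard similarity of the inclusive neighborhoods of their unshared endpoints; the toy graph is illustrative, and the subsequent single-linkage clustering of links is omitted:

```python
# Illustrative graph for the adjacent-edge similarity computation.
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2, 4], 4: [3]}

def inclusive_neighborhood(v):
    return set(adj[v]) | {v}

def edge_similarity(e1, e2):
    """Jaccard similarity of two adjacent edges: compare the inclusive
    neighborhoods of their non-shared endpoints."""
    shared = set(e1) & set(e2)
    if len(shared) != 1:
        raise ValueError("edges must share exactly one endpoint")
    i = (set(e1) - shared).pop()
    j = (set(e2) - shared).pop()
    ni, nj = inclusive_neighborhood(i), inclusive_neighborhood(j)
    return len(ni & nj) / len(ni | nj)

# Edges (0,1) and (0,2) share node 0; compare neighborhoods of 1 and 2.
print(edge_similarity((0, 1), (0, 2)))  # 0.75
```

Clustering the links rather than the nodes means a node inherits membership in every community that one of its incident links belongs to, which is what allows the communities to overlap.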
{ "cite_N": [ "@cite_10" ], "mid": [ "2110620844" ], "abstract": [ "Network theory has become pervasive in all sectors of biology, from biochemical signalling to human societies, but identification of relevant functional communities has been impaired by many nodes belonging to several overlapping groups at once, and by hierarchical structures. These authors offer a radically different viewpoint, focusing on links rather than nodes, which allows them to demonstrate that overlapping communities and network hierarchies are two faces of the same issue." ] }
1501.05700
1659086870
We introduce a new conception of community structure, which we refer to as hidden community structure. Hidden community structure refers to a specific type of overlapping community structure, in which the detection of weak, but meaningful, communities is hindered by the presence of stronger communities. We present Hidden Community Detection (HICODE), an algorithm template that identifies both the strong, dominant community structure as well as the weaker, hidden community structure in networks. HICODE begins by first applying an existing community detection algorithm to a network, and then removing the structure of the detected communities from the network. In this way, the structure of the weaker communities becomes visible. Through application of HICODE, we demonstrate that a wide variety of real networks from different domains contain many communities that, though meaningful, are not detected by any of the popular community detection algorithms that we consider. Additionally, on both real and synthetic networks containing a hidden ground-truth community structure, HICODE uncovers this structure better than any baseline algorithms that we compared against. For example, on a real network of undergraduate students that can be partitioned either by 'Dorm' (residence hall) or 'Year', we see that HICODE uncovers the weaker 'Year' communities with a JCRecall score (a recall-based metric that we define in the text) of over 0.7, while the baseline algorithms achieve scores below 0.2.
Many algorithms identify communities by expanding 'seeds' into full communities @cite_16 @cite_4 . Two examples are OSLOM @cite_14 , which uses nodes as seeds and joins small clusters together into statistically significant larger clusters, and Greedy Clique Expansion, which uses cliques as seeds and expands them to optimize a local fitness function based on the number of internal and external links @cite_7 .
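Greedy seed expansion with a GCE-style local fitness f(C) = k_in / (k_in + k_out)^alpha can be sketched as below. The toy graph is illustrative, and the clique-finding, overlap handling, and pruning steps of the real algorithm are omitted:

```python
# Illustrative graph: two triangles joined by the bridge edge 2-3.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}

def fitness(community, alpha=1.0):
    """Local fitness k_in / (k_in + k_out)^alpha: k_in counts edge
    endpoints inside the community (2 per internal edge), k_out counts
    endpoints of edges leaving it."""
    k_in = k_out = 0
    for v in community:
        for w in adj[v]:
            if w in community:
                k_in += 1
            else:
                k_out += 1
    return k_in / (k_in + k_out) ** alpha

def expand_seed(seed, alpha=1.0):
    """Greedily add the neighbor that most improves fitness; stop when no
    neighbor yields an improvement."""
    community = set(seed)
    while True:
        frontier = {w for v in community for w in adj[v]} - community
        best, best_gain = None, 0.0
        for w in frontier:
            gain = fitness(community | {w}, alpha) - fitness(community, alpha)
            if gain > best_gain:
                best, best_gain = w, gain
        if best is None:
            return community
        community.add(best)

print(expand_seed({0, 1}))  # expands to the left triangle {0, 1, 2}
```

On this graph, adding node 2 raises the fitness from 2/4 to 6/7, while adding the bridge node 3 would lower it to 8/10, so expansion stops at the triangle.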
{ "cite_N": [ "@cite_14", "@cite_16", "@cite_4", "@cite_7" ], "mid": [ "", "2026981630", "2066090568", "2136576902" ], "abstract": [ "", "Most community detection algorithms are trying to obtain the global information of the network. But increasingly large scale of the current network makes accessing to global information very difficult. In the meanwhile, the network shows power-law distribution and sparse features. And local community mining algorithms which use these features have more advantages over global mining methods. In this paper, we proposed a local community detection algorithm based on the core members named LLCDA (Leader based Local Community Detecting Algorithm) which uses local structural information in the network to optimize a local objective function. A local community can be detected through continuous optimization of the function by expanding from an initial core member computed by a modified PageRank sorting algorithm. The proposed LLCDA algorithm has been tested on both synthetic and real world networks, and it has been compared with other community detecting algorithms. The experimental results validated our proposed LLCDA and showed that significant improvements have been achieved by this technique.", "Community detection is an important task in network analysis. A community (also referred to as a cluster) is a set of cohesive vertices that have more connections inside the set than outside. In many social and information networks, these communities naturally overlap. For instance, in a social network, each vertex in a graph corresponds to an individual who usually participates in multiple communities. One of the most successful techniques for finding overlapping communities is based on local optimization and expansion of a community metric around a seed set of vertices. In this paper, we propose an efficient overlapping community detection algorithm using a seed set expansion approach. 
In particular, we develop new seeding strategies for a personalized PageRank scheme that optimizes the conductance community score. The key idea of our algorithm is to find good seeds, and then expand these seed sets using the personalized PageRank clustering procedure. Experimental results show that this seed set expansion approach outperforms other state-of-the-art overlapping community detection methods. We also show that our new seeding strategies are better than previous strategies, and are thus effective in finding good overlapping clusters in a graph.", "In complex networks it is common for each node to belong to several communities, implying a highly overlapping community structure. Recent advances in benchmarking indicate that the existing community assignment algorithms that are capable of detecting overlapping communities perform well only when the extent of community overlap is kept to modest levels. To overcome this limitation, we introduce a new community assignment algorithm called Greedy Clique Expansion (GCE). The algorithm identifies distinct cliques as seeds and expands these seeds by greedily optimizing a local fitness function. We perform extensive benchmarks on synthetic data to demonstrate that GCE’s good performance is robust across diverse graph topologies. Significantly, GCE is the only algorithm to perform well on these synthetic graphs, in which every node belongs to multiple communities. Furthermore, when put to the task of identifying functional modules in protein interaction data, and college dorm assignments in Facebook friendship data, we find that GCE performs competitively." ] }
1501.05700
1659086870
We introduce a new conception of community structure, which we refer to as hidden community structure. Hidden community structure refers to a specific type of overlapping community structure, in which the detection of weak, but meaningful, communities is hindered by the presence of stronger communities. We present Hidden Community Detection (HICODE), an algorithm template that identifies both the strong, dominant community structure as well as the weaker, hidden community structure in networks. HICODE begins by first applying an existing community detection algorithm to a network, and then removing the structure of the detected communities from the network. In this way, the structure of the weaker communities becomes visible. Through application of HICODE, we demonstrate that a wide variety of real networks from different domains contain many communities that, though meaningful, are not detected by any of the popular community detection algorithms that we consider. Additionally, on both real and synthetic networks containing a hidden ground-truth community structure, HICODE uncovers this structure better than any baseline algorithms that we compared against. For example, on a real network of undergraduate students that can be partitioned either by 'Dorm' (residence hall) or 'Year', we see that HICODE uncovers the weaker 'Year' communities with a JCRecall score (a recall-based metric that we define in the text) of over 0.7, while the baseline algorithms achieve scores below 0.2.
Many algorithms remove edges from the network, and then define a community as a connected component left after the appropriate edges have been removed. Two such algorithms are the classic Girvan-Newman algorithm, which removes edges with high betweenness @cite_11 , and the more recent work of Chen and Hero, which removes edges based on local Fiedler vector centrality @cite_12 .
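The edge-scoring step of Girvan-Newman can be sketched with a Brandes-style edge-betweenness computation. The unweighted toy graph is illustrative, and the full algorithm recomputes betweenness after every removal:

```python
from collections import deque

def edge_betweenness(adj):
    """Brandes-style edge betweenness for an unweighted, undirected graph
    given as an adjacency list."""
    bet = {frozenset((v, w)): 0.0 for v in adj for w in adj[v]}
    for s in adj:
        # Forward BFS: shortest-path counts (sigma) and predecessor lists.
        dist, order = {s: 0}, []
        sigma = {v: 0.0 for v in adj}
        sigma[s] = 1.0
        preds = {v: [] for v in adj}
        queue = deque([s])
        while queue:
            v = queue.popleft()
            order.append(v)
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # Backward pass: accumulate path dependencies onto edges.
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):
            for v in preds[w]:
                c = sigma[v] / sigma[w] * (1.0 + delta[w])
                bet[frozenset((v, w))] += c
                delta[v] += c
    # Each source-target pair is seen from both endpoints; halve.
    return {e: b / 2.0 for e, b in bet.items()}

# Two triangles joined by the bridge 2-3: the bridge scores highest,
# so Girvan-Newman removes it first, splitting the graph in two.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
bet = edge_betweenness(adj)
bridge = max(bet, key=bet.get)
print(sorted(bridge))  # [2, 3]
```

All nine cross-triangle shortest paths pass through the bridge, giving it a betweenness of 9, far above any intra-triangle edge.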
{ "cite_N": [ "@cite_12", "@cite_11" ], "mid": [ "1647709314", "1971421925" ], "abstract": [ "A deep community in a graph is a connected component that can only be seen after removal of nodes or edges from the rest of the graph. This paper formulates the problem of detecting deep communities as multi-stage node removal that maximizes a new centrality measure, called the local Fiedler vector centrality (LFVC), at each stage. The LFVC is associated with the sensitivity of algebraic connectivity to node or edge removals. We prove that a greedy node edge removal strategy, based on successive maximization of LFVC, has bounded performance loss relative to the optimal, but intractable, combinatorial batch removal strategy. Under a stochastic block model framework, we show that the greedy LFVC strategy can extract deep communities with probability one as the number of observations becomes large. We apply the greedy LFVC strategy to real-world social network datasets. Compared with conventional community detection methods we demonstrate improved ability to identify important communities and key members in the network.", "A number of recent studies have focused on the statistical properties of networked systems such as social networks and the Worldwide Web. Researchers have concentrated particularly on a few properties that seem to be common to many networks: the small-world property, power-law degree distributions, and network transitivity. In this article, we highlight another property that is found in many networks, the property of community structure, in which network nodes are joined together in tightly knit groups, between which there are only looser connections. We propose a method for detecting such communities, built around the idea of using centrality indices to find community boundaries. 
We test our method on computer-generated and real-world graphs whose community structure is already known and find that the method detects this known structure with high sensitivity and reliability. We also apply the method to two networks whose community structure is not well known—a collaboration network and a food web—and find that it detects significant and informative community divisions in both cases." ] }
1501.05700
1659086870
We introduce a new conception of community structure, which we refer to as hidden community structure. Hidden community structure refers to a specific type of overlapping community structure, in which the detection of weak, but meaningful, communities is hindered by the presence of stronger communities. We present Hidden Community Detection (HICODE), an algorithm template that identifies both the strong, dominant community structure as well as the weaker, hidden community structure in networks. HICODE begins by first applying an existing community detection algorithm to a network, and then removing the structure of the detected communities from the network. In this way, the structure of the weaker communities becomes visible. Through application of HICODE, we demonstrate that a wide variety of real networks from different domains contain many communities that, though meaningful, are not detected by any of the popular community detection algorithms that we consider. Additionally, on both real and synthetic networks containing a hidden ground-truth community structure, HICODE uncovers this structure better than any baseline algorithms that we compared against. For example, on a real network of undergraduate students that can be partitioned either by 'Dorm' (residence hall) or 'Year', we see that HICODE uncovers the weaker 'Year' communities with a JCRecall score (a recall-based metric that we define in the text) of over 0.7, while the baseline algorithms achieve scores below 0.2.
Young et al. present a cascading algorithm that identifies communities using an existing community detection algorithm and then removes all edges within the detected communities @cite_2 . This process is repeated several times. We call this method 'Cascade'.
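The cascading template itself is only a few lines. The sketch below pairs it with a deliberately simple toy base detector (connected components over triangle-supported edges, falling back to plain components), which is an illustrative stand-in, not one of the detectors used by Young et al.:

```python
def triangle_components(adj):
    """Toy base detector: connected components over edges that close a
    triangle; falls back to plain connected components when no edge does."""
    tri = {v: set() for v in adj}
    for v in adj:
        for w in adj[v]:
            if set(adj[v]) & set(adj[w]):  # v and w share a third neighbor
                tri[v].add(w)
                tri[w].add(v)
    base = tri if any(tri.values()) else {v: set(adj[v]) for v in adj}
    seen, comms = set(), []
    for v in adj:
        if v in seen:
            continue
        stack, comp = [v], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(base[u])
        seen |= comp
        if len(comp) > 1:          # ignore singleton components
            comms.append(comp)
    return comms

def cascade(adj, detect, rounds=2):
    """Cascade template: detect communities, then delete their internal
    edges so that weaker structure becomes visible on the next round."""
    adj = {v: list(ws) for v, ws in adj.items()}  # work on a copy
    layers = []
    for _ in range(rounds):
        comms = detect(adj)
        if not comms:
            break
        layers.append(comms)
        for c in comms:
            for v in c:
                adj[v] = [w for w in adj[v] if w not in c]
    return layers

# Strong triangles {0,1,2} and {3,4,5} overlaid with weak pairs 0-3, 1-4, 2-5.
adj = {0: [1, 2, 3], 1: [0, 2, 4], 2: [0, 1, 5],
       3: [4, 5, 0], 4: [3, 5, 1], 5: [3, 4, 2]}
layers = cascade(adj, triangle_components)
print(layers)  # round 1 finds the triangles, round 2 the hidden pairs
```

Round one detects the two dominant triangles; after their internal edges are deleted, only the weak pairing structure remains, and round two recovers it.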
{ "cite_N": [ "@cite_2" ], "mid": [ "1808737093" ], "abstract": [ "Community detection is the process of assigning nodes and links in significant communities (e.g. clusters, function modules) and its development has led to a better understanding of complex networks. When applied to sizable networks, we argue that most detection algorithms correctly identify prominent communities, but fail to do so across multiple scales. As a result, a significant fraction of the network is left uncharted. We show that this problem stems from larger or denser communities overshadowing smaller or sparser ones, and that this effect accounts for most of the undetected communities and unassigned links. We propose a generic cascading approach to community detection that circumvents the problem. Using real network datasets with two widely used community detection algorithms, we show how cascading detection allows for the detection of the missing communities and results in a significant drop of the fraction of unassigned links." ] }
1501.05821
767034684
In so-called constraint-based testing, symbolic execution is a common technique used as a part of the process to generate test data for imperative programs. Databases are ubiquitous in software and testing of programs manipulating databases is thus essential to enhance the reliability of software. This work proposes and evaluates experimentally a symbolic execution algorithm for constraint-based testing of database programs. First, we describe SimpleDB, a formal language which offers a minimal and well-defined syntax and semantics, to model common interaction scenarios between programs and databases. Secondly, we detail the proposed algorithm for symbolic execution of SimpleDB models. This algorithm considers a SimpleDB program as a sequence of operations over a set of relational variables, modeling both the database tables and the program variables. By integrating this relational model of the program with classical static symbolic execution, the algorithm can generate a set of path constraints for any finite path to test in the control-flow graph of the program. Solutions of these constraints are test inputs for the program, including an initial content for the database. When the program is executed with respect to these inputs, it is guaranteed to follow the path with respect to which the constraints were generated. Finally, the algorithm is evaluated experimentally using representative SimpleDB models.
In future work, we intend to extend our technique to generate inputs for more complex interaction scenarios between databases and programs. First, it would be relevant to evaluate how, and to what extent, the symbolic execution mechanism proposed here for simple SQL statements and simple relational database schemas can be generalized to more elaborate ones. Secondly, it should be investigated how dynamic SQL can be integrated with our approach, possibly relying on static analysis @cite_31 @cite_28 or on concolic execution. Thirdly, SQL statements frequently behave non-deterministically, either because the underlying DBMS executing the statement behaves non-deterministically, or because the database is modified concurrently by several programs. Whether and how the approach proposed here can encompass such non-deterministic behaviors remains a topic for further research.
{ "cite_N": [ "@cite_28", "@cite_31" ], "mid": [ "2124418290", "2003751975" ], "abstract": [ "Since 2002, over 10 of total cyber vulnerabilities were SQL injection vulnerabilities (SQLIVs). This paper presents an algorithm of prepared statement replacement for removing SQLIVs by replacing SQL statements with prepared statements. Prepared statements have a static structure, which prevents SQL injection attacks from changing the logical structure of a prepared statement. We created a prepared statement replacement algorithm and a corresponding tool for automated fix generation. We conducted four case studies of open source projects to evaluate the capability of the algorithm and its automation. The empirical results show that prepared statement code correctly replaced 94 of the SQLIVs in these projects.", "Many data-intensive applications dynamically construct queries in response to client requests and execute them. Java servlets, for example, can create strings that represent SQL queries and then send the queries, using JDBC, to a database server for execution. The servlet programmer enjoys static checking via Java's strong type system. However, the Java type system does little to check for possible errors in the dynamically generated SQL query strings. Thus, a type error in a generated selection query (e.g., comparing a string attribute with an integer) can result in an SQL runtime exception. Currently, such defects must be rooted out through careful testing, or (worse) might be found by customers at runtime. In this article, we present a sound, static program analysis technique to verify that dynamically generated query strings do not contain type errors. We describe our analysis technique and provide soundness results for our static analysis algorithm. 
We also describe the details of a prototype tool based on the algorithm and present several illustrative defects found in senior software-engineering student-team projects, online tutorial examples, and a real-world purchase order system written by one of the authors." ] }
1501.05821
767034684
In so-called constraint-based testing, symbolic execution is a common technique used as a part of the process to generate test data for imperative programs. Databases are ubiquitous in software and testing of programs manipulating databases is thus essential to enhance the reliability of software. This work proposes and evaluates experimentally a symbolic execution algorithm for constraint-based testing of database programs. First, we describe SimpleDB, a formal language which offers a minimal and well-defined syntax and semantics, to model common interaction scenarios between programs and databases. Secondly, we detail the proposed algorithm for symbolic execution of SimpleDB models. This algorithm considers a SimpleDB program as a sequence of operations over a set of relational variables, modeling both the database tables and the program variables. By integrating this relational model of the program with classical static symbolic execution, the algorithm can generate a set of path constraints for any finite path to test in the control-flow graph of the program. Solutions of these constraints are test inputs for the program, including an initial content for the database. When the program is executed with respect to these inputs, it is guaranteed to follow the path with respect to which the constraints were generated. Finally, the algorithm is evaluated experimentally using representative SimpleDB models.
Finally, our approach can be used with any classical code coverage criterion based on the notion of an execution path. Nevertheless, several works @cite_18 @cite_9 @cite_0 @cite_38 @cite_23 propose test adequacy criteria specifically adapted to the testing of database-driven programs. Integrating such coverage criteria into our constraint-based approach is a topic of ongoing research.
{ "cite_N": [ "@cite_38", "@cite_18", "@cite_9", "@cite_0", "@cite_23" ], "mid": [ "2103881553", "", "2131467114", "2157913501", "2147427167" ], "abstract": [ "Adequacy criteria provide an objective measurement of test quality. Although these criteria are a major research issue in software testing, little work has been specifically targeted towards the testing of database-driven applications. In this paper, two structural coverage criteria are provided for evaluating the adequacy of a test suite for SQL queries that retrieve information from the database. The first deals with the way in which the queries select and join information from different tables and the second with the way in which selected data is further processed. The criteria take into account both the structure and the data loaded in the database, as well as the syntax and semantics of the query. The coverage criteria are subsequently used to develop test inputs of queries drawn from a real-life application. Finally, a number of issues related to the kind of faults that can be detected and the size of the test suite are discussed.", "", "Although a software application always executes within a particular environment, current testing methods have largely ignored these environmental factors. Many applications execute in an environment that contains a database. In this paper, we propose a family of test adequacy criteria that can be used to assess the quality of test suites for database-driven applications. Our test adequacy criteria use dataflow information that is associated with the entities in a relational database. Furthermore, we develop a unique representation of a database-driven application that facilitates the enumeration of database interaction associations. These associations can reflect an application's definition and use of database entities at multiple levels of granularity. 
The usage of a tool to calculate intraprocedural database interaction associations for two case study applications indicates that our adequacy criteria can be computed with an acceptable time and space overhead.", "Many software applications have a component based on database management systems in which information is generally handled through SQL queries embedded in the application code. When automation of software testing is mentioned in the research, this is normally associated with programs written in imperative and structured languages. However, the problem of automated software testing applied to programs that manage databases using SQL is still an open issue. This paper presents a measurement of the coverage of SQL queries and the tool that automates it. We also show how database test data may be revised and changed using this measurement by means of completing or deleting information to achieve the highest possible value of coverage of queries that have access to the database.", "Database application programs are ubiquitous, so good techniques for testing them are needed. Recently, several research groups have proposed new approaches to generating tests for database applications and for assessing test data adequacy. This paper describes a mutation testing tool, JDAMA (Java Database Application Mutation Analyzer), for Java programs that interact with a database via the JDBC interface. Our approach extends the mutation testing approach for SQL by , by integrating it with analysis and instrumentation of the application bytecode. JDAMA's use is illustrated through a small study which uses mutation scores to compare two test generation techniques for database applications." ] }
1501.05973
2152391012
We introduce and study methods for inferring and learning from correspondences among neurons. The approach enables alignment of data from distinct multiunit studies of nervous systems. We show that the methods for inferring correspondences combine data effectively from cross-animal studies to make joint inferences about behavioral decision making that are not possible with the data from a single animal. We focus on data collection, machine learning, and prediction in the representative and long-studied invertebrate nervous system of the European medicinal leech. Acknowledging the computational intractability of the general problem of identifying correspondences among neurons, we introduce efficient computational procedures for matching neurons across animals. The methods include techniques that adjust for missing cells or additional cells in the different data sets that may reflect biological or experimental variation. The methods highlight the value of harnessing inference and learning in new kinds of computational microscopes for multiunit neurobiological studies.
The work described in this paper builds upon several sub-areas of machine learning. In particular, the key ingredients include metric learning, correspondence matching and probabilistic dimensionality reduction. Distance metric learning is a fairly active research area. Most of the work in distance metric learning focuses on the @math -Nearest Neighbor ( @math -NN) classification scenario and often aims to learn a Mahalanobis metric that is consistent with the training data. The distance metric learning method employed in this paper is closest to the work of @cite_1 and @cite_2 , but modified to consider only the sets of similar cells given by the user.
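A toy numpy sketch of learning a Mahalanobis metric from similar and dissimilar pairs is given below. The contrastive loss, margin, step size, and data are illustrative assumptions, not the formulation of the cited works; parameterizing M = L.T @ L keeps the learned matrix positive semidefinite without an explicit projection step:

```python
import numpy as np

def mahalanobis(x, y, L):
    """Distance under M = L.T @ L (positive semidefinite by construction)."""
    d = L @ (x - y)
    return float(np.sqrt(d @ d))

def learn_metric(X, similar, dissimilar, steps=100, lr=0.05, margin=1.0):
    """Gradient descent on a contrastive loss: shrink squared distances of
    similar pairs, and grow those of dissimilar pairs inside the margin."""
    L = np.eye(X.shape[1])
    for _ in range(steps):
        grad = np.zeros_like(L)
        for i, j in similar:              # pull similar pairs together
            d = X[i] - X[j]
            grad += 2.0 * (L @ np.outer(d, d))
        for i, j in dissimilar:           # push close dissimilar pairs apart
            d = X[i] - X[j]
            if mahalanobis(X[i], X[j], L) < margin:
                grad -= 2.0 * (L @ np.outer(d, d))
        L -= lr * grad
    return L

# Axis 0 separates the two groups; axis 1 is irrelevant within-group noise.
X = np.array([[0.0, 0.0], [0.0, 1.0], [3.0, 0.0], [3.0, 1.0]])
similar = [(0, 1), (2, 3)]
dissimilar = [(0, 2), (1, 3)]
L = learn_metric(X, similar, dissimilar)
# The learned metric suppresses the irrelevant axis, keeping group separation.
print(round(mahalanobis(X[0], X[1], L), 4), mahalanobis(X[0], X[2], L))
```

On this data the learned metric collapses the similar pairs (the noise axis is driven toward zero) while leaving the between-group separation along axis 0 untouched.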
{ "cite_N": [ "@cite_1", "@cite_2" ], "mid": [ "2168689650", "2104752854" ], "abstract": [ "Graphs are a powerful and versatile tool useful in various subfields of science and engineering. In many applications, for example, in pattern recognition and computer vision, it is required to measure the similarity of objects. When graphs are used for the representation of structured objects, then the problem of measuring object similarity turns into the problem of computing the similarity of graphs, which is also known as graph matching. In this paper, similarity measures on graphs and related algorithms are reviewed. Also theoretical work showing various relations between different similarity measures is discussed. Other topics to be addressed include graph clustering and efficient indexing of large databases of graphs.", "We present an algorithm for learning a quadratic Gaussian metric (Mahalanobis distance) for use in classification tasks. Our method relies on the simple geometric intuition that a good metric is one under which points in the same class are simultaneously near each other and far from points in the other classes. We construct a convex optimization problem whose solution generates such a metric by trying to collapse all examples in the same class to a single point and push examples in other classes infinitely far away. We show that when the metric we learn is used in simple classifiers, it yields substantial improvements over standard alternatives on a variety of problems. We also discuss how the learned metric may be used to obtain a compact low dimensional feature representation of the original input space, allowing more efficient classification with very little reduction in performance." ] }
1501.05892
1837352444
Sparse superposition codes were recently introduced by Barron and Joseph for reliable communication over the additive white Gaussian noise (AWGN) channel at rates approaching the channel capacity. The codebook is defined in terms of a Gaussian design matrix, and codewords are sparse linear combinations of columns of the matrix. In this paper, we propose an approximate message passing decoder for sparse superposition codes, whose decoding complexity scales linearly with the size of the design matrix. The performance of the decoder is rigorously analyzed and it is shown to asymptotically achieve the AWGN capacity with an appropriate power allocation. Simulation results are provided to demonstrate the performance of the decoder at finite blocklengths. We introduce a power allocation scheme to improve the empirical performance, and demonstrate how the decoding complexity can be significantly reduced by using Hadamard design matrices.
The adaptive successive decoder of Joseph-Barron @cite_7 and the iterative soft-decision decoder of Cho-Barron @cite_21 @cite_4 both have probability of error that decays as @math for any fixed rate @math , but the latter has better empirical performance. Theorem shows that the probability of error for the AMP decoder goes to zero for all @math , but does not give a rate of decay; hence we cannot theoretically compare its performance with the Cho-Barron decoder in @cite_4 . We can, however, compare the two decoders qualitatively.
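For concreteness, the shape of an AMP iteration with a sectionwise softmax denoiser can be sketched in numpy as below. The parameter choices (L, M, n, sigma), the uniform power allocation, and the empirical estimate of tau^2 from the residual are illustrative assumptions for a toy demonstration, not the paper's analyzed decoder:

```python
import numpy as np

rng = np.random.default_rng(0)

L_sec, M = 4, 4        # L sections with M columns each
n = 2000               # channel uses (a very low rate, for a clean demo)
P, sigma = 1.0, 0.01   # average codeword power and noise std

A = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, L_sec * M))
c = np.sqrt(n * P / L_sec)          # nonzero value in each section

sent = rng.integers(0, M, size=L_sec)          # random message
beta0 = np.zeros(L_sec * M)
beta0[np.arange(L_sec) * M + sent] = c
y = A @ beta0 + sigma * rng.normal(size=n)     # AWGN channel output

def eta(s, tau2):
    """Sectionwise denoiser: a softmax within each section, scaled so
    that every section sums to c."""
    out = np.empty_like(s)
    for l in range(L_sec):
        sec = s[l * M:(l + 1) * M] * c / tau2
        sec = np.exp(sec - sec.max())          # numerically stable softmax
        out[l * M:(l + 1) * M] = c * sec / sec.sum()
    return out

beta, z = np.zeros(L_sec * M), y.copy()
for _ in range(10):
    tau2 = z @ z / n                           # empirical noise estimate
    new_beta = eta(beta + A.T @ z, tau2)
    # Residual with an Onsager-style correction term.
    z = y - A @ new_beta + (z / tau2) * (P - new_beta @ new_beta / n)
    beta = new_beta

decoded = beta.reshape(L_sec, M).argmax(axis=1)
print((decoded == sent).all())  # True: the message is recovered
```

The sectionwise softmax plays the role of the Bayes-optimal estimate of each section given an effective observation in Gaussian noise of variance tau^2; decoding finishes by taking the largest entry in each section.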
{ "cite_N": [ "@cite_21", "@cite_4", "@cite_7" ], "mid": [ "2092639489", "2520575406", "2039957496" ], "abstract": [ "Recently sparse superposition codes with iterative term selection have been developed which are mathematically proven to be fast and reliable at any rate below the capacity for the additive white Gaussian noise channel with power control. We improve the performance using a soft decision decoder with Bayes optimal statistics at each iteration, followed by thresholding only at the final step. This presentation includes formulation of the statistics, proof of their distributions, numerical simulations of the performance improvement, and useful identities relating a squared error risk to a posterior probability of error.", "", "For the additive white Gaussian noise channel with average codeword power constraint, sparse superposition codes are developed. These codes are based on the statistical high-dimensional regression framework. In a previous paper, we investigated decoding using the optimal maximum-likelihood decoding scheme. Here, a fast decoding algorithm, called the adaptive successive decoder, is developed. For any rate R less than the capacity C, communication is shown to be reliable with nearly exponentially small error probability. Specifically, for blocklength n, it is shown that the error probability is exponentially small in n logn." ] }
1501.05892
1837352444
Sparse superposition codes were recently introduced by Barron and Joseph for reliable communication over the additive white Gaussian noise (AWGN) channel at rates approaching the channel capacity. The codebook is defined in terms of a Gaussian design matrix, and codewords are sparse linear combinations of columns of the matrix. In this paper, we propose an approximate message passing decoder for sparse superposition codes, whose decoding complexity scales linearly with the size of the design matrix. The performance of the decoder is rigorously analyzed and it is shown to asymptotically achieve the AWGN capacity with an appropriate power allocation. Simulation results are provided to demonstrate the performance of the decoder at finite blocklengths. We introduce a power allocation scheme to improve the empirical performance, and demonstrate how the decoding complexity can be significantly reduced by using Hadamard design matrices.
An approximate message passing decoder for sparse superposition codes was recently proposed by Barbier and Krzakala in @cite_14 . This decoder has different update rules from the AMP proposed here. A replica-based analysis of the decoder in @cite_14 suggested it could not achieve rates beyond a threshold which was strictly smaller than @math . Subsequently, @cite_23 reported empirical results which show that the performance of the decoder in @cite_14 can be improved by using spatially coupled Hadamard matrices to define the code.
{ "cite_N": [ "@cite_14", "@cite_23" ], "mid": [ "2088934832", "1909826856" ], "abstract": [ "Superposition codes are efficient for the Additive White Gaussian Noise channel. We provide here a replica analysis of the performances of these codes for large signals. We also consider a Bayesian Approximate Message Passing decoder based on a belief-propagation approach, and discuss its performance using the density evolution technic. Our main findings are 1) for the sizes we can access, the message-passing decoder outperforms other decoders studied in the literature 2) its performance is limited by a sharp phase transition and 3) while these codes reach capacity as @math (a crucial parameter in the code) increases, the performance of the message passing decoder worsen as the phase transition goes to lower rates.", "We study the approximate message-passing decoder for sparse superposition coding on the additive white Gaussian noise channel and extend our preliminary work. We use heuristic statistical-physics-based tools, such as the cavity and the replica methods, for the statistical analysis of the scheme. While superposition codes asymptotically reach the Shannon capacity, we show that our iterative decoder is limited by a phase transition similar to the one that happens in low density parity check codes. We consider two solutions to this problem, that both allow to reach the Shannon capacity: 1) a power allocation strategy and 2) the use of spatial coupling, a novelty for these codes that appears to be promising. We present, in particular, simulations, suggesting that spatial coupling is more robust and allows for better reconstruction at finite code lengths. Finally, we show empirically that the use of a fast Hadamard-based operator allows for an efficient reconstruction, both in terms of computational time and memory, and the ability to deal with very large messages." ] }
1501.05936
2279388086
This paper presents a novel approach to including non-instantaneous discrete control transitions in the linear hybrid automaton approach to simulation and verification of hybrid control systems. In this paper we study the control of a continuously evolving analog plant using a controller programmed in a synchronous programming language. We provide extensions to the synchronous subset of the SystemJ programming language for modeling, implementation, and verification of such hybrid systems. We provide a sound rewrite semantics that approximate the evolution of the continuous variables in the discrete domain inspired from the classical supervisory control theory. The resultant discrete time model can be verified using classical model-checking tools. Finally, we show that systems designed using our approach have a higher fidelity than the ones designed using the hybrid automaton approach.
Finally, the work closest to the one described in this article is: (1) @cite_2 , which extends the Esterel language to model timed automata @cite_1 , i.e., ODEs with a rate of change always equal to 1; in our proposal we are able to model the more general hybrid automaton rather than its subset, the timed automaton; and (2) @cite_12 , a seminal work on extending synchronous imperative languages to model hybrid automata, later extended and given a formal treatment by @cite_13 . The work described herein differs significantly from both @cite_13 and @cite_12 in that they do not approximate the continuous behavior of the plant; instead, all discrete transitions are carried out and then a so-called continuous phase is launched, which models the continuous evolution of the plant until the invariant condition holds, just as in a hybrid automaton. Since these approaches derive their semantics from the hybrid automaton, they inherit the same problem described in , i.e., non-zero mode-switch transitions cannot be captured in the semantics.
{ "cite_N": [ "@cite_13", "@cite_1", "@cite_12", "@cite_2" ], "mid": [ "2149007581", "2101508170", "2007068025", "2116912109" ], "abstract": [ "In this paper, we present an extension of the synchronous language Quartz by new kinds of variables, actions and statements for modeling the interaction of synchronous systems with their continuous environment. We present an operational semantics of the obtained hybrid modeling language and moreover show how compilation algorithms that have been originally developed for synchronous languages can be extended to these hybrid programs. Thus, we can automatically translate the hybrid programs to compact symbolic representations of hybrid transition systems that can be immediately used for simulation and formal verification.", "Alur, R. and D.L. Dill, A theory of timed automata, Theoretical Computer Science 126 (1994) 183-235. We propose timed (finite) automata to model the behavior of real-time systems over time. Our definition provides a simple, and yet powerful, way to annotate state-transition graphs with timing constraints using finitely many real-valued clocks. A timed automaton accepts timed words-infinite sequences in which a real-valued time of occurrence is associated with each symbol. We study timed automata from the perspective of formal language theory: we consider closure properties, decision problems, and subclasses. We consider both nondeterministic and deterministic transition structures, and both Büchi and Muller acceptance conditions. We show that nondeterministic timed automata are closed under union and intersection, but not under complementation, whereas deterministic timed Muller automata are closed under all Boolean operations. The main construction of the paper is an (PSPACE) algorithm for checking the emptiness of the language of a (nondeterministic) timed automaton. We also prove that the universality problem and the language inclusion problem are solvable only for the deterministic automata: both problems are undecidable (Π¹₁-hard) in the nondeterministic case and PSPACE-complete in the deterministic case. Finally, we discuss the application of this theory to automatic verification of real-time requirements of finite-state systems.", "Abstract A hallmark of the Esterel language is the combination of perfect synchrony with total orthogonality and powerful constructs for preemption, suspension and trap handling. It is desirable to make this kind of expressiveness available for the description of hybrid systems, that is, systems whose evolution is understood in terms of segment-wise continuous functions over the real time axis. Our approach consists of modifying Esterel concepts, most notably by replacing the discrete time frame by a continuously advancing one. We are then able to state a semantics made up of transitions with closed execution intervals of non-zero length. By an instant we understand an execution interval within this framework. Hybrid signals may change their value during such an instant, non-hybrid ones, that is, classical signals immediately settle to a specific state and keep it the whole time. Time consumption still has to be specified explicitly, namely in that instants reflect jumps among control flow locations defined by pause statements; all other statements take no time in the sense that arbitrarily many of them may be sequentially executed regardless of the instant's duration. A transfer of perfect synchrony from the discrete to the continuous is in this way accomplished. We also consider an example, which is from the automotive domain, traces, bisimilarity and compositionality.", "The goal of TAXYS is to provide a framework for developing real-time embedded code and verifying its correct behavior with respect to quantitative timing requirements. To achieve so, TAXYS connects France Telecom's ESTEREL compiler SAXO-RT with VERIMAG's model-checker KRONOS. TAXYS has been successfully applied to real industrial telecommunication systems, such as a GSM radio link from Alcatel and a phone prototype from France Telecom." ] }
1501.05724
2112769596
Considering the high heterogeneity of the ontologies published on the web, ontology matching is a crucial issue whose aim is to establish links between an entity of a source ontology and one or several entities from a target ontology. Perfectible similarity measures, considered as sources of information, are combined to establish these links. The theory of belief functions is a powerful mathematical tool for combining such uncertain information. In this paper, we introduce a decision process based on a distance measure to identify the best possible matching entities for a given source entity.
Only a few ontology matching methods have considered dealing with uncertainty in the matching process as a crucial issue. In this section, we present some of them, in which probability theory @cite_9 and the Dempster-Shafer theory ( @cite_12 , @cite_2 , @cite_5 ) are the main mathematical models used. In @cite_9 , the authors proposed an approach for matching ontologies based on Bayesian networks, an extension of BayesOWL. BayesOWL translates an OWL ontology into a Bayesian network (BN) through the application of a set of rules and procedures. To match two ontologies, the source and target ontologies are first translated into @math and @math , respectively. The mapping between the two ontologies is then processed as evidential reasoning between @math and @math . The authors assume that the similarity between a concept @math from the source ontology and a concept @math from the target ontology is measured by the joint probability distribution P( @math , @math ).
{ "cite_N": [ "@cite_5", "@cite_9", "@cite_12", "@cite_2" ], "mid": [ "1500155021", "1563413502", "64057243", "1990113766" ], "abstract": [ "An ontology matching method (or a matcher) aims at matching every entity (or concept) in one ontology to the most suitable entity (or entities) in another ontology. Usually it is almost impossible to find a perfect match in the second ontology for every entity in the first ontology, so a matcher generally returns a set of possible matches with some weights (uncertainty) attached to each pair of match. In order to improve a matching result, several matchers can be used and the matched results from these matchers are combined with suitable approaches. In this paper, we first propose two new matchers among three matchers we use. We then address the need of dealing with uncertainties in mapping by investigating how some uncertainty reasoning frameworks can be used to combine matching results. We apply both the Dempster Shafer theory of evidence (DS theory) and Possibility Theory to merge the results computed by different matchers. Our experimental results and comparisons with related work indicate that integrating these theories to deal with uncertain ontology matching is a promising way to improve the overall matching results.", "This paper presents our ongoing effort on developing a principled methodology for automatic ontology mapping based on BayesOWL, a probabilistic framework we developed for modeling uncertainty in semantic web. In this approach, the source and target ontologies are first translated into Bayesian networks (BN); the concept mapping between the two ontologies are treated as evidential reasoning between the two translated BNs. Probabilities needed for constructing conditional probability tables (CPT) during translation and for measuring semantic similarity during mapping are learned using text classification techniques where each concept in an ontology is associated with a set of semantically relevant text documents, which are obtained by ontology guided web mining. The basic ideas of this approach are validated by positive results from computer experiments on two small real-world ontologies.", "Ontologies, at least in the form of taxonomies, have proved rather successful, and are employed in many fields, as far apart as biology and finance. Reaching an agreement over a single ontology has proved difficult, and to obtain actual interoperability it is necessary to map the different ontologies. Mapping one entity between a source ontology and one in a target ontology means to compare the first entity with all the entities in the second ontology: matchers analyse different aspects of the entities to identify the similarities. A single matcher can analyse only some aspects, and often has to rely on uncertain information. Therefore combining the outcomes of different matchers can yield better results. In this paper I present a framework that uses Dempster-Shafer as a model for interpreting and combining results computed by the matchers.", "The increasing number of ontologies of the semantic web poses new challenges for ontology mapping. Ontology mapping in the context of question answering can provide more correct results if the mapping process can deal with uncertainty effectively that is caused by the incomplete and inconsistent information used and produced by the mapping process. We present a novel approach of how Dempster-Shafer belief functions can be used to represent uncertain similarities created by both syntactic and semantic similarity algorithms. For ontology mapping in the context of question answering on the semantic web we propose a multi agent framework where agents create dynamic ontology mappings in order to integrate information and provide precise answers for the users query. We also discuss the problems which can be encountered if we have conflicting beliefs between agents in a particular mapping." ] }
1501.05724
2112769596
Considering the high heterogeneity of the ontologies published on the web, ontology matching is a crucial issue whose aim is to establish links between an entity of a source ontology and one or several entities from a target ontology. Perfectible similarity measures, considered as sources of information, are combined to establish these links. The theory of belief functions is a powerful mathematical tool for combining such uncertain information. In this paper, we introduce a decision process based on a distance measure to identify the best possible matching entities for a given source entity.
In @cite_12 , the author viewed ontology matching as a decision-making process that must be handled under uncertainty. He presented a generic framework that uses Dempster-Shafer theory as a mathematical model for representing uncertain mappings as well as for combining the results of the different matchers. Given two ontologies @math and @math , the frame of discernment represents the Cartesian product e x @math where each hypothesis is the couple @math such that e @math and @math . Each matcher is considered an expert that returns a similarity measure, which is converted into a basic belief mass. Dempster's rule of combination is then used to combine the results provided by the different matchers. The pairs with plausibility and belief below a given threshold are discarded. The remaining pairs represent the best mapping for a given entity.
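The combination step described above is Dempster's rule over mass functions on a frame of discernment of candidate pairs. A minimal self-contained sketch (the frame, the mass values, and the queried hypothesis below are illustrative, not taken from @cite_12 ):

```python
def dempster_combine(m1, m2):
    """Combine two mass functions (dicts: frozenset -> mass) with Dempster's rule.

    Mass falling on an empty intersection is conflict; it is removed and the
    remaining mass is renormalized by 1 - conflict.
    """
    combined = {}
    conflict = 0.0
    for f1, v1 in m1.items():
        for f2, v2 in m2.items():
            inter = f1 & f2
            if inter:
                combined[inter] = combined.get(inter, 0.0) + v1 * v2
            else:
                conflict += v1 * v2
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {f: v / (1.0 - conflict) for f, v in combined.items()}

def belief(m, hypothesis):
    # Bel(H): total mass committed to subsets of H.
    return sum(v for f, v in m.items() if f <= hypothesis)

def plausibility(m, hypothesis):
    # Pl(H): total mass not contradicting H.
    return sum(v for f, v in m.items() if f & hypothesis)

# Frame of discernment: candidate matches for a source entity e.
theta = frozenset({"(e,c1)", "(e,c2)", "(e,c3)"})
# Two matchers, each converted to a mass function (values are illustrative);
# mass on the whole frame theta encodes the matcher's ignorance.
m1 = {frozenset({"(e,c1)"}): 0.6, theta: 0.4}
m2 = {frozenset({"(e,c1)"}): 0.5, frozenset({"(e,c2)"}): 0.2, theta: 0.3}
m12 = dempster_combine(m1, m2)
```

Pairs whose belief and plausibility under `m12` fall below a chosen threshold would then be discarded, as in the framework described above.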
{ "cite_N": [ "@cite_12" ], "mid": [ "64057243" ], "abstract": [ "Ontologies, at least in the form of taxonomies, have proved rather successful, and are employed in many fields, as far apart as biology and finance. Reaching an agreement over a single ontology has proved difficult, and to obtain actual interoperability it is necessary to map the different ontologies. Mapping one entity between a source ontology and one in a target ontology means to compare the first entity with all the entities in the second ontology: matchers analyse different aspects of the entities to identify the similarities. A single matcher can analyse only some aspects, and often has to rely on uncertain information. Therefore combining the outcomes of different matchers can yield better results. In this paper I present a framework that uses Dempster-Shafer as a model for interpreting and combining results computed by the matchers." ] }
1501.05724
2112769596
Considering the high heterogeneity of the ontologies published on the web, ontology matching is a crucial issue whose aim is to establish links between an entity of a source ontology and one or several entities from a target ontology. Perfectible similarity measures, considered as sources of information, are combined to establish these links. The theory of belief functions is a powerful mathematical tool for combining such uncertain information. In this paper, we introduce a decision process based on a distance measure to identify the best possible matching entities for a given source entity.
Although the authors in @cite_2 handle uncertainty in the matching process, their proposal differs from the one in @cite_12 . In fact, they use the Dempster-Shafer theory in the specific context of question answering, where including uncertainty may yield better results. Unlike @cite_12 , they do not describe in depth how the frame of discernment is constructed. In addition, uncertainty is handled only after the matching is performed: the similarity matrix is constructed for each matcher, the results are then modeled using the theory of belief functions, and finally they are combined.
{ "cite_N": [ "@cite_12", "@cite_2" ], "mid": [ "64057243", "1990113766" ], "abstract": [ "Ontologies, at least in the form of taxonomies, have proved rather successful, and are employed in many fields, as far apart as biology and finance. Reaching an agreement over a single ontology has proved difficult, and to obtain actual interoperability it is necessary to map the different ontologies. Mapping one entity between a source ontology and one in a target ontology means to compare the first entity with all the entities in the second ontology: matchers analyse different aspects of the entities to identify the similarities. A single matcher can analyse only some aspects, and often has to rely on uncertain information. Therefore combining the outcomes of different matchers can yield better results. In this paper I present a framework that uses Dempster-Shafer as a model for interpreting and combining results computed by the matchers.", "The increasing number of ontologies of the semantic web poses new challenges for ontology mapping. Ontology mapping in the context of question answering can provide more correct results if the mapping process can deal with uncertainty effectively that is caused by the incomplete and inconsistent information used and produced by the mapping process. We present a novel approach of how Dempster-Shafer belief functions can be used to represent uncertain similarities created by both syntactic and semantic similarity algorithms. For ontology mapping in the context of question answering on the semantic web we propose a multi agent framework where agents create dynamic ontology mappings in order to integrate information and provide precise answers for the users query. We also discuss the problems which can be encountered if we have conflicting beliefs between agents in a particular mapping." ] }
1501.05724
2112769596
Considering the high heterogeneity of the ontologies published on the web, ontology matching is a crucial issue whose aim is to establish links between an entity of a source ontology and one or several entities from a target ontology. Perfectible similarity measures, considered as sources of information, are combined to establish these links. The theory of belief functions is a powerful mathematical tool for combining such uncertain information. In this paper, we introduce a decision process based on a distance measure to identify the best possible matching entities for a given source entity.
In @cite_5 , the authors focused on integrating uncertainty when matching ontologies. The proposed method modeled and combined the outputs of three ontology matchers. For an entity e @math , the frame of discernment @math is composed of mappings between e and all the concepts in an ontology @math . The different similarity values obtained through the application of the three matchers are interpreted as mass values. Then, a combination of the results of the three matchers is performed.
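The first step described here, interpreting one matcher's similarity values as mass values on the frame of discernment, can be sketched as follows. The normalization scheme is an assumption made for illustration (reserving residual mass for the whole frame as ignorance), not necessarily the one used in @cite_5 :

```python
def similarity_to_mass(sims, theta):
    """Convert one matcher's similarity scores (dict: pair -> score) into a
    mass function on the frame of discernment `theta`.

    Illustrative scheme: each candidate pair gets mass proportional to its
    similarity, and the leftover mass goes to the whole frame (ignorance).
    """
    total = sum(sims.values())
    m = {frozenset({pair}): s / (total + 1.0) for pair, s in sims.items()}
    m[theta] = 1.0 - sum(m.values())  # residual mass on the full frame
    return m

# Three matchers scoring candidate matches for a source entity e (toy values).
theta = frozenset({"(e,c1)", "(e,c2)", "(e,c3)"})
matcher_scores = [
    {"(e,c1)": 0.9, "(e,c2)": 0.3, "(e,c3)": 0.1},
    {"(e,c1)": 0.7, "(e,c2)": 0.4, "(e,c3)": 0.2},
    {"(e,c1)": 0.8, "(e,c2)": 0.1, "(e,c3)": 0.1},
]
masses = [similarity_to_mass(s, theta) for s in matcher_scores]
```

The resulting mass functions would then be combined across the three matchers, as described above.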
{ "cite_N": [ "@cite_5" ], "mid": [ "1500155021" ], "abstract": [ "An ontology matching method (or a matcher) aims at matching every entity (or concept) in one ontology to the most suitable entity (or entities) in another ontology. Usually it is almost impossible to find a perfect match in the second ontology for every entity in the first ontology, so a matcher generally returns a set of possible matches with some weights (uncertainty) attached to each pair of match. In order to improve a matching result, several matchers can be used and the matched results from these matchers are combined with suitable approaches. In this paper, we first propose two new matchers among three matchers we use. We then address the need of dealing with uncertainties in mapping by investigating how some uncertainty reasoning frameworks can be used to combine matching results. We apply both the Dempster Shafer theory of evidence (DS theory) and Possibility Theory to merge the results computed by different matchers. Our experimental results and comparisons with related work indicate that integrating these theories to deal with uncertain ontology matching is a promising way to improve the overall matching results." ] }
1501.05200
1526184959
We are motivated by problems that arise in a number of applications such as Online Marketing and Explosives detection, where the observations are usually modeled using Poisson statistics. We model each observation as a Poisson random variable whose mean is a sparse linear superposition of known patterns. Unlike many conventional problems observations here are not identically distributed since they are associated with different sensing modalities. We analyze the performance of a maximum likelihood (ML) decoder, which for our Poisson setting involves a non-linear optimization but yet is computationally tractable. We derive fundamental sample complexity bounds for sparse recovery when the measurements are contaminated with Poisson noise. In contrast to the least-squares linear regression setting with Gaussian noise, we observe that in addition to sparsity, the scale of the parameters also fundamentally impacts @math error in the Poisson setting. We show that our upper bounds are tight under suitable regularity conditions. Specifically, we derive a minimax matching lower bound on the mean-squared error and show that our constrained ML decoder is minimax optimal for this regime.
Parameter estimation for non-identical Poisson distributions has been studied in the context of Generalized Linear Models (GLMs). However, our model is inherently different from the exponential family of GLM models that has been studied in @cite_0 @cite_18 @cite_9 @cite_20 . In particular the GLM model corresponding to the Poisson distributed data studied in the literature has the following form: Therefore, the log likelihood takes the following form: In contrast, in the setting we are interested in, the observations are modeled as follows: and the log likelihood function has the form:
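The displayed equations in the paragraph above did not survive extraction. The following reconstruction, using generic symbols (y_i for counts, x_i / a_i for covariate and sensing vectors, theta for the parameter), which may differ from the paper's notation, contrasts the two likelihoods consistently with the surrounding description:

```latex
% Poisson GLM with exponential (log-link) rates, as in the cited GLM literature:
%   y_i ~ Poisson( exp(x_i^T theta) )
\log L_{\mathrm{GLM}}(\theta) = \sum_{i=1}^{n} \left( y_i \, x_i^{\top}\theta - \exp\!\left(x_i^{\top}\theta\right) \right) + \mathrm{const.}

% Additive-rate model considered here:
%   y_i ~ Poisson( a_i^T theta ),  with theta sparse and nonnegative
\log L(\theta) = \sum_{i=1}^{n} \left( y_i \log\!\left(a_i^{\top}\theta\right) - a_i^{\top}\theta \right) + \mathrm{const.}
```

The exponential term in the first log-likelihood versus the logarithm of a linear form in the second is the structural difference the subsequent discussion turns on.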
{ "cite_N": [ "@cite_0", "@cite_9", "@cite_18", "@cite_20" ], "mid": [ "2950190315", "1555163866", "2127357443", "2076313409" ], "abstract": [ "High-dimensional statistical inference deals with models in which the number of parameters p is comparable to or larger than the sample size n. Since it is usually impossible to obtain consistent procedures unless @math , a line of recent work has studied models with various types of low-dimensional structure, including sparse vectors, sparse and structured matrices, low-rank matrices and combinations thereof. In such settings, a general approach to estimation is to solve a regularized optimization problem, which combines a loss function measuring how well the model fits the data with some regularization function that encourages the assumed structure. This paper provides a unified framework for establishing consistency and convergence rates for such regularized M-estimators under high-dimensional scaling. We state one main theorem and show how it can be used to re-derive some existing results, and also to obtain a number of new results on consistency and convergence rates, in both @math -error and related norms. Our analysis also identifies two key properties of loss and regularization functions, referred to as restricted strong convexity and decomposability, that ensure corresponding regularized M-estimators have fast convergence rates and which are optimal in many well-studied cases.", "The versatility of exponential families, along with their attendant convexity properties, make them a popular and effective statistical model. A central issue is learning these models in high-dimensions, such as when there is some sparsity pattern of the optimal parameter. This work characterizes a certain strong convexity property of general exponential families, which allow their generalization ability to be quantified. In particular, we show how this property can be used to analyze generic exponential families under L_1 regularization.", "The problem of sparse signal recovery from a relatively small number of noisy measurements has been studied extensively in the recent literature on compressed sensing. However, the focus of those studies appears to be limited to the case of linear projections disturbed by Gaussian noise, and the sparse signal reconstruction problem is treated as linear regression with l_1-norm regularization constraint. A natural question to ask is whether one can accurately recover sparse signals under different noise assumptions. Herein, we extend the results of [13] to the more general case of exponential-family noise that includes Gaussian noise as a particular case, and yields l_1-regularized Generalized Linear Model (GLM) regression problem. We show that, under standard restricted isometry property (RIP) assumptions on the design matrix, l_1-minimization can provide stable recovery of a sparse signal in presence of the exponential-family noise, provided that certain sufficient conditions on the noise distribution are satisfied.", "Consider a sample of size n from a regular exponential family in p_n dimensions. Let θ̂_n denote the maximum likelihood estimator, and consider the case where p_n tends to infinity with n and where θ_n is a sequence of parameter values in R^{p_n}. Moment conditions are provided under which ||θ̂_n − θ_n|| = O_p(√(p_n/n)) and ||θ̂_n − θ_n − X̄_n|| = O_p(p_n/n), where X̄_n is the sample mean. The latter result provides normal approximation results when p_n²/n → 0. It is shown by example that even for a single coordinate of (θ̂_n − θ_n), p_n²/n → 0 may be needed for normal approximation. However, if p_n^{3/2}/n → 0, the likelihood ratio test statistic λ for a simple hypothesis has a chi-square approximation in the sense that (2 log λ − p_n)/√(2p_n) converges in distribution to N(0, 1)." ] }
1501.05200
1526184959
We are motivated by problems that arise in a number of applications such as Online Marketing and Explosives detection, where the observations are usually modeled using Poisson statistics. We model each observation as a Poisson random variable whose mean is a sparse linear superposition of known patterns. Unlike many conventional problems observations here are not identically distributed since they are associated with different sensing modalities. We analyze the performance of a maximum likelihood (ML) decoder, which for our Poisson setting involves a non-linear optimization but yet is computationally tractable. We derive fundamental sample complexity bounds for sparse recovery when the measurements are contaminated with Poisson noise. In contrast to the least-squares linear regression setting with Gaussian noise, we observe that in addition to sparsity, the scale of the parameters also fundamentally impacts @math error in the Poisson setting. We show that our upper bounds are tight under suitable regularity conditions. Specifically, we derive a minimax matching lower bound on the mean-squared error and show that our constrained ML decoder is minimax optimal for this regime.
There are several important differences between the two models. Imposing sparsity on @math in Model I corresponds to a smaller number of multiplicative terms in the Poisson rates, whereas @math being sparse in Model II results in fewer additive terms in the rates of the corresponding model. At a more fundamental level, the loss function (negative log-likelihood) for Model I has an exponential term ( @math ). The assumption of strong convexity on the feasible cone @math can be established when the elements of @math are independent sub-Gaussian draws @cite_0 . Consequently, unlike in our case, the issue of signal amplitude does not arise for this model.
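To illustrate why ML estimation for the additive-rate model is computationally tractable despite the nonlinear log term, here is a minimal sketch using the classical EM (Shepp-Vardi / Richardson-Lucy) multiplicative updates for a nonnegative parameter. This is a standard algorithm for exactly this Poisson likelihood, not necessarily the optimization method used in the paper, and it enforces only nonnegativity, not the sparsity constraint; dimensions and data below are illustrative:

```python
import numpy as np

def neg_log_lik(theta, A, y, eps=1e-12):
    # Negative Poisson log-likelihood for rates mu_i = a_i^T theta
    # (the log y_i! constant is dropped).
    mu = np.maximum(A @ theta, eps)
    return np.sum(mu - y * np.log(mu))

def poisson_ml_em(A, y, n_iter=200, eps=1e-12):
    """EM / Richardson-Lucy iterations for theta >= 0 in y_i ~ Poisson(a_i^T theta).

    Each multiplicative update provably does not decrease the likelihood and
    keeps theta in the nonnegative orthant.
    """
    theta = np.ones(A.shape[1])
    col_sums = A.sum(axis=0)
    for _ in range(n_iter):
        mu = np.maximum(A @ theta, eps)
        theta = theta * (A.T @ (y / mu)) / col_sums
    return theta

rng = np.random.default_rng(1)
n, p, k = 400, 20, 3
A = rng.uniform(0.0, 1.0, size=(n, p))   # nonnegative sensing patterns
theta0 = np.zeros(p)
theta0[:k] = [2.0, 1.0, 0.5]             # sparse, nonnegative ground truth
y = rng.poisson(A @ theta0)
theta_hat = poisson_ml_em(A, y)
```

Because the negative log-likelihood here is convex in theta (a linear term plus the negative log of a linear form), such iterations converge toward the constrained ML estimate whose error the paper's bounds characterize.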
{ "cite_N": [ "@cite_0" ], "mid": [ "2950190315" ], "abstract": [ "High-dimensional statistical inference deals with models in which the the number of parameters p is comparable to or larger than the sample size n. Since it is usually impossible to obtain consistent procedures unless @math , a line of recent work has studied models with various types of low-dimensional structure, including sparse vectors, sparse and structured matrices, low-rank matrices and combinations thereof. In such settings, a general approach to estimation is to solve a regularized optimization problem, which combines a loss function measuring how well the model fits the data with some regularization function that encourages the assumed structure. This paper provides a unified framework for establishing consistency and convergence rates for such regularized M-estimators under high-dimensional scaling. We state one main theorem and show how it can be used to re-derive some existing results, and also to obtain a number of new results on consistency and convergence rates, in both @math -error and related norms. Our analysis also identifies two key properties of loss and regularization functions, referred to as restricted strong convexity and decomposability, that ensure corresponding regularized M-estimators have fast convergence rates and which are optimal in many well-studied cases." ] }
1501.05200
1526184959
We are motivated by problems that arise in a number of applications such as Online Marketing and Explosives detection, where the observations are usually modeled using Poisson statistics. We model each observation as a Poisson random variable whose mean is a sparse linear superposition of known patterns. Unlike many conventional problems observations here are not identically distributed since they are associated with different sensing modalities. We analyze the performance of a maximum likelihood (ML) decoder, which for our Poisson setting involves a non-linear optimization but yet is computationally tractable. We derive fundamental sample complexity bounds for sparse recovery when the measurements are contaminated with Poisson noise. In contrast to the least-squares linear regression setting with Gaussian noise, we observe that in addition to sparsity, the scale of the parameters also fundamentally impacts @math error in the Poisson setting. We show that our upper bounds are tight under suitable regularity conditions. Specifically, we derive a minimax matching lower bound on the mean-squared error and show that our constrained ML decoder is minimax optimal for this regime.
We can view model I as an instance of a general class of sparse recovery problems. Indeed, @cite_9 studies the convergence behavior of @math regularized ML estimation for exponential family distributions and GLMs in this context. The bounds on the error of sparse recovery of the parameter are based on the RE condition. Moreover, in order to obtain useful bounds on the estimation error of GLMs, they additionally need the natural sufficient statistic of the exponential family to be sub-Gaussian. This condition can clearly be violated in our setting, where the data are Poisson distributed and there is no constraint requiring the sensing matrix to be sub-Gaussian.
{ "cite_N": [ "@cite_9" ], "mid": [ "1555163866" ], "abstract": [ "The versatility of exponential families, along with their attendant convexity properties, make them a popular and effective statistical model. A central issue is learning these models in high-dimensions, such as when there is some sparsity pattern of the optimal parameter. This work characterizes a certain strong convexity property of general exponential families, which allow their generalization ability to be quantified. In particular, we show how this property can be used to analyze generic exponential families under L_1 regularization." ] }
1501.05200
1526184959
We are motivated by problems that arise in a number of applications such as Online Marketing and Explosives detection, where the observations are usually modeled using Poisson statistics. We model each observation as a Poisson random variable whose mean is a sparse linear superposition of known patterns. Unlike many conventional problems observations here are not identically distributed since they are associated with different sensing modalities. We analyze the performance of a maximum likelihood (ML) decoder, which for our Poisson setting involves a non-linear optimization but yet is computationally tractable. We derive fundamental sample complexity bounds for sparse recovery when the measurements are contaminated with Poisson noise. In contrast to the least-squares linear regression setting with Gaussian noise, we observe that in addition to sparsity, the scale of the parameters also fundamentally impacts @math error in the Poisson setting. We show that our upper bounds are tight under suitable regularity conditions. Specifically, we derive a minimax matching lower bound on the mean-squared error and show that our constrained ML decoder is minimax optimal for this regime.
More generally, @cite_0 describes a unified framework for the analysis of regularized @math estimators in high dimensions. They also mention an extension of their framework to GLMs and describe "strong convexity" of the objective function on @math as a sufficient condition to obtain consistency of M-estimators under Model I. As described earlier, this requirement of strong convexity on @math is not consistent with our model. In addition, the statistical aspects of that work require that the components of the sensing matrix be characterized by , which we do not require here.
{ "cite_N": [ "@cite_0" ], "mid": [ "2950190315" ], "abstract": [ "High-dimensional statistical inference deals with models in which the the number of parameters p is comparable to or larger than the sample size n. Since it is usually impossible to obtain consistent procedures unless @math , a line of recent work has studied models with various types of low-dimensional structure, including sparse vectors, sparse and structured matrices, low-rank matrices and combinations thereof. In such settings, a general approach to estimation is to solve a regularized optimization problem, which combines a loss function measuring how well the model fits the data with some regularization function that encourages the assumed structure. This paper provides a unified framework for establishing consistency and convergence rates for such regularized M-estimators under high-dimensional scaling. We state one main theorem and show how it can be used to re-derive some existing results, and also to obtain a number of new results on consistency and convergence rates, in both @math -error and related norms. Our analysis also identifies two key properties of loss and regularization functions, referred to as restricted strong convexity and decomposability, that ensure corresponding regularized M-estimators have fast convergence rates and which are optimal in many well-studied cases." ] }
1501.05200
1526184959
We are motivated by problems that arise in a number of applications such as Online Marketing and Explosives detection, where the observations are usually modeled using Poisson statistics. We model each observation as a Poisson random variable whose mean is a sparse linear superposition of known patterns. Unlike many conventional problems observations here are not identically distributed since they are associated with different sensing modalities. We analyze the performance of a maximum likelihood (ML) decoder, which for our Poisson setting involves a non-linear optimization but yet is computationally tractable. We derive fundamental sample complexity bounds for sparse recovery when the measurements are contaminated with Poisson noise. In contrast to the least-squares linear regression setting with Gaussian noise, we observe that in addition to sparsity, the scale of the parameters also fundamentally impacts @math error in the Poisson setting. We show that our upper bounds are tight under suitable regularity conditions. Specifically, we derive a minimax matching lower bound on the mean-squared error and show that our constrained ML decoder is minimax optimal for this regime.
Statistical guarantees for sparse recovery in settings similar to model II have been provided in @cite_16 @cite_5 @cite_14 in the context of photon-limited measurements. They assume that the observations are distributed as @math , where the elements of the signal @math and of the sensing matrix are positive, and the sensing matrix satisfies the so-called Flux Preserving assumption: @math
{ "cite_N": [ "@cite_5", "@cite_14", "@cite_16" ], "mid": [ "2125935121", "2112405755", "" ], "abstract": [ "This paper describes performance bounds for compressed sensing (CS) where the underlying sparse or compressible (sparsely approximable) signal is a vector of nonnegative intensities whose measurements are corrupted by Poisson noise. In this setting, standard CS techniques cannot be applied directly for several reasons. First, the usual signal-independent and or bounded noise models do not apply to Poisson noise, which is nonadditive and signal-dependent. Second, the CS matrices typically considered are not feasible in real optical systems because they do not adhere to important constraints, such as nonnegativity and photon flux preservation. Third, the typical l2 - l1 minimization leads to overfitting in the high-intensity regions and oversmoothing in the low-intensity areas. In this paper, we describe how a feasible positivity- and flux-preserving sensing matrix can be constructed, and then analyze the performance of a CS reconstruction approach for Poisson data that minimizes an objective function consisting of a negative Poisson log likelihood term and a penalty term which measures signal sparsity. We show that, as the overall intensity of the underlying signal increases, an upper bound on the reconstruction error decays at an appropriate rate (depending on the compressibility of the signal), but that for a fixed signal intensity, the error bound actually grows with the number of measurements or sensors. This surprising fact is both proved theoretically and justified based on physical intuition.", "This paper provides performance bounds for compressed sensing in the presence of Poisson noise using expander graphs. The Poisson noise model is appropriate for a variety of applications, including low-light imaging and digital streaming, where the signal-independent and or bounded noise models used in the compressed sensing literature are no longer applicable. 
In this paper, we develop a novel sensing paradigm based on expander graphs and propose a maximum a posteriori (MAP) algorithm for recovering sparse or compressible signals from Poisson observations. The geometry of the expander graphs and the positivity of the corresponding sensing matrices play a crucial role in establishing the bounds on the signal reconstruction error of the proposed algorithm. We support our results with experimental demonstrations of reconstructing average packet arrival rates and instantaneous packet counts at a router in a communication network, where the arrivals of packets in each flow follow a Poisson process.", "" ] }
1501.05200
1526184959
We are motivated by problems that arise in a number of applications such as Online Marketing and Explosives detection, where the observations are usually modeled using Poisson statistics. We model each observation as a Poisson random variable whose mean is a sparse linear superposition of known patterns. Unlike many conventional problems observations here are not identically distributed since they are associated with different sensing modalities. We analyze the performance of a maximum likelihood (ML) decoder, which for our Poisson setting involves a non-linear optimization but yet is computationally tractable. We derive fundamental sample complexity bounds for sparse recovery when the measurements are contaminated with Poisson noise. In contrast to the least-squares linear regression setting with Gaussian noise, we observe that in addition to sparsity, the scale of the parameters also fundamentally impacts @math error in the Poisson setting. We show that our upper bounds are tight under suitable regularity conditions. Specifically, we derive a minimax matching lower bound on the mean-squared error and show that our constrained ML decoder is minimax optimal for this regime.
The latter assumption arises in some photon counting applications, such as imaging under Poisson noise, where the total number of expected measured photons cannot be larger than the intensity of the original signal. An upper bound on the reconstruction error of the constrained ML estimator is given in @cite_5 . Surprisingly, this upper bound scales linearly with the number of measurements. However, this is reasonable under the Flux Preserving assumption: for a fixed signal intensity, more measurements lead to a lower SNR for each observation. As a result, unlike conventional compressive sensing bounds, the estimates do not converge to the ground truth as the sample size increases. Nevertheless, the Flux Preserving constraint does not arise in our setting, and consequently the applications and methods of analysis are different.
{ "cite_N": [ "@cite_5" ], "mid": [ "2125935121" ], "abstract": [ "This paper describes performance bounds for compressed sensing (CS) where the underlying sparse or compressible (sparsely approximable) signal is a vector of nonnegative intensities whose measurements are corrupted by Poisson noise. In this setting, standard CS techniques cannot be applied directly for several reasons. First, the usual signal-independent and or bounded noise models do not apply to Poisson noise, which is nonadditive and signal-dependent. Second, the CS matrices typically considered are not feasible in real optical systems because they do not adhere to important constraints, such as nonnegativity and photon flux preservation. Third, the typical l2 - l1 minimization leads to overfitting in the high-intensity regions and oversmoothing in the low-intensity areas. In this paper, we describe how a feasible positivity- and flux-preserving sensing matrix can be constructed, and then analyze the performance of a CS reconstruction approach for Poisson data that minimizes an objective function consisting of a negative Poisson log likelihood term and a penalty term which measures signal sparsity. We show that, as the overall intensity of the underlying signal increases, an upper bound on the reconstruction error decays at an appropriate rate (depending on the compressibility of the signal), but that for a fixed signal intensity, the error bound actually grows with the number of measurements or sensors. This surprising fact is both proved theoretically and justified based on physical intuition." ] }
1501.05140
1555715032
The task of expert finding has been getting increasing attention in information retrieval literature. However, the current state-of-the-art is still lacking in principled approaches for combining different sources of evidence. This paper explores the usage of unsupervised rank aggregation methods as a principled approach for combining multiple estimators of expertise, derived from the textual contents, from the graph-structure of the citation patterns for the community of experts, and from profile information about the experts. We specifically experimented two unsupervised rank aggregation approaches well known in the information retrieval literature, namely CombSUM and CombMNZ. Experiments made over a dataset of academic publications for the area of Computer Science attest for the adequacy of these methods.
Serdyukov and Macdonald have surveyed the most important concepts and representative previous works in the expert finding task @cite_3 @cite_15 . Two of the most popular and well-performing types of methods are the profile-centric and the document-centric approaches @cite_12 @cite_20 . Profile-centric approaches build an expert profile as a pseudo-document, by aggregating the text segments relevant to the expert @cite_22 . These profiles are later indexed and used to support the search for experts on a topic. Document-centric approaches are typically based on traditional document retrieval techniques, using the documents directly. In a probabilistic approach to the problem, the first step is to estimate the conditional probability @math of the query topic @math given a document @math . Assuming that the terms co-occurring with an expert can be used to describe that expert, @math can be used to weight the co-occurrence evidence of experts with @math in documents. The conditional probability @math of an expert candidate @math given a query @math can then be estimated by aggregating all the evidence in all the documents where @math and @math co-occur. Experimental results show that document-centric approaches usually outperform profile-centric approaches @cite_20 .
{ "cite_N": [ "@cite_22", "@cite_3", "@cite_15", "@cite_12", "@cite_20" ], "mid": [ "2126226055", "1566861348", "2007810306", "11620817", "38978401" ], "abstract": [ "Searching an organization's document repositories for experts provides a cost effective solution for the task of expert finding. We present two general strategies to expert searching given a document collection which are formalized using generative probabilistic models. The first of these directly models an expert's knowledge based on the documents that they are associated with, whilst the second locates documents on topic, and then finds the associated expert. Forming reliable associations is crucial to the performance of expert finding systems. Consequently, in our evaluation we compare the different approaches, exploring a variety of associations along with other operational parameters (such as topicality). Using the TREC Enterprise corpora, we show that the second strategy consistently outperforms the first. A comparison against other unsupervised techniques, reveals that our second model delivers excellent performance.", "The automatic search for knowledgeable people in the scope of an organization is a key function which makes modern Enterprise search systems commercially successful and socially demanded. A number of effective approaches to expert finding were recently proposed in academic publications. Although, most of them use reasonably defined measures of personal expertise, they often limit themselves to rather unrealistic and sometimes oversimplified principles. In this thesis, we explore several ways to go beyond state-of-the-art assumptions used in research on expert finding and propose several novel solutions for this and related tasks. First, we describe measures of expertise that do not assume independent occurrence of terms and persons in a document what makes them perform better than the measures based on independence of all entities in a document. 
One of these measures makes persons central to the process of terms generation in a document. Another one assumes that the position of the person’s mention in a document with respect to the positions of query terms indicates the relation of the person to the document’s relevant content. Second, we find the ways to use not only direct expertise evidence for a person concentrated within the document space of the person’s current employer and only within those organizational documents that mention the person. We successfully utilize the predicting potential of additional indirect expertise evidence publicly available on the Web and in the organizational documents implicitly related to a person. Finally, besides the expert finding methods we proposed, we also demonstrate solutions for the tasks from related domains. In one case, we use several algorithms of multi-step relevance propagation to search for typed entities in Wikipedia. In another case, we suggest generic methods for placing photos uploaded to Flickr on the World map using language models of locations built entirely on the annotations provided by users with a few task specific extensions.", "In an expert search task, the users’ need is to identify people who have relevant expertise to a topic of interest. An expert search system predicts and ranks the expertise of a set of candidate persons with respect to the users’ query. In this paper, we propose a novel approach for predicting and ranking candidate expertise with respect to a query, called the Voting Model for Expert Search. In the Voting Model, we see the problem of ranking experts as a voting problem. We model the voting problem using 12 various voting techniques, which are inspired from the data fusion field. We investigate the effectiveness of the Voting Model and the associated voting techniques across a range of document weighting models, in the context of the TREC 2005 and TREC 2006 Enterprise tracks. 
The evaluation results show that the voting paradigm is very effective, without using any query or collection-specific heuristics. Moreover, we show that improving the quality of the underlying document representation can significantly improve the retrieval performance of the voting techniques on an expert search task. In particular, we demonstrate that applying field-based weighting models improves the ranking of candidates. Finally, we demonstrate that the relative performance of the voting techniques for the proposed approach is stable on a given task regardless of the used weighting models, suggesting that some of the proposed voting techniques will always perform better than other voting techniques.", "The goal of the enterprise track is to conduct experiments with enterprise data — intranet pages, email archives, document repositories — that reflect the experiences of users in real organisations, such that for example, an email ranking technique that is effective here would be a good choice for deployment in a real multi-user email search application. This involves both understanding user needs in enterprise search and development of appropriate IR techniques.", "The goal of the enterprise track is to conduct experiments with enterprise data — intranet pages, email archives, document repositories — that reflect the experiences of users in real organizations, such that for example, an email ranking technique that is effective here would be a good choice for deployment in a real multi-user email search application. This involves both understanding user needs in enterprise search and development of appropriate IR techniques. The enterprise track began in TREC 2005 as the successor to the web track, and this is reflected in the tasks and measures. While the track takes much of its inspiration from the web track, the foci are on search at the enterprise scale, incorporating non-web data and discovering relationships between entities in the organization. 
As a result, we have created the first test collections for multi-user email search and expert finding. This year the track has continued using the W3C collection, a crawl of the publicly available web of the World Wide Web Consortium performed in June 2004. This collection contains not only web pages but numerous mailing lists, technical documents and other kinds of data that represent the day-to-day operation of the W3C. Details of the collection may be found in the 2005 track overview (, 2005). Additionally, this year we began creating a repository of information derived from the collection by participants. This data is hosted alongside the W3C collection at NIST. There were two tasks this year, email discussion search and expert search, and both represent refinements of the tasks initially done in 2005. NIST developed topics and relevance judgments for the email discussion search task this year. For expert search, rather than relying on found data as last year, the track participants created the topics and relevance judgments. Twenty-five groups took part across the two tasks." ] }
1501.05140
1555715032
The task of expert finding has been getting increasing attention in information retrieval literature. However, the current state-of-the-art is still lacking in principled approaches for combining different sources of evidence. This paper explores the usage of unsupervised rank aggregation methods as a principled approach for combining multiple estimators of expertise, derived from the textual contents, from the graph-structure of the citation patterns for the community of experts, and from profile information about the experts. We specifically experimented two unsupervised rank aggregation approaches well known in the information retrieval literature, namely CombSUM and CombMNZ. Experiments made over a dataset of academic publications for the area of Computer Science attest for the adequacy of these methods.
Many different authors have proposed sophisticated probabilistic retrieval models, specific to the expert finding task, based on the document-centric approach @cite_22 @cite_26 @cite_3 . For instance, @cite_24 proposed a two-stage language model combining document relevance and co-occurrence between experts and query terms. Fang and Zhai derived a generative probabilistic model from the probabilistic ranking principle and extended it with query expansion and non-uniform candidate priors @cite_23 . The authors of @cite_16 proposed a multiple-window-based approach for integrating multiple levels of associations between experts and query topics in expert finding. More recently, @cite_8 proposed a unified language model integrating many document features for expert finding. Although the above models are capable of employing different types of associations among query terms, documents and experts, they mostly ignore other important sources of evidence, such as the importance of individual documents, or the co-citation patterns between experts available from citation graphs. In this paper, we offer a principled approach for combining a much larger set of expertise estimates.
{ "cite_N": [ "@cite_26", "@cite_22", "@cite_8", "@cite_3", "@cite_24", "@cite_23", "@cite_16" ], "mid": [ "2098057544", "2126226055", "2139348969", "1566861348", "2151350269", "2143561480", "2162017692" ], "abstract": [ "One aspect in which retrieving named entities is different from retrieving documents is that the items to be retrieved - persons, locations, organizations - are only indirectly described by documents throughout the collection. Much work has been dedicated to finding references to named entities, in particular to the problems of named entity extraction and disambiguation. However, just as important for retrieval performance is how these snippets of text are combined to build named entity representations. We focus on the TREC expert search task where the goal is to identify people who are knowledgeable on a specific topic. Existing language modeling techniques for expert finding assume that terms and person entities are conditionally independent given a document. We present theoretical and experimental evidence that this simplifying assumption ignores information on how named entities relate to document content. To address this issue, we propose a new document representation which emphasizes text in proximity to entities and thus incorporates sequential information implicit in text. Our experiments demonstrate that the proposed model significantly improves retrieval performance. The main contribution of this work is an effective formal method for explicitly modeling the dependency between the named entities and terms which appear in a document.", "Searching an organization's document repositories for experts provides a cost effective solution for the task of expert finding. We present two general strategies to expert searching given a document collection which are formalized using generative probabilistic models. 
The first of these directly models an expert's knowledge based on the documents that they are associated with, whilst the second locates documents on topic, and then finds the associated expert. Forming reliable associations is crucial to the performance of expert finding systems. Consequently, in our evaluation we compare the different approaches, exploring a variety of associations along with other operational parameters (such as topicality). Using the TREC Enterprise corpora, we show that the second strategy consistently outperforms the first. A comparison against other unsupervised techniques, reveals that our second model delivers excellent performance.", "We argue that expert finding is sensitive to multiple document features in an organization, and therefore, can benefit from the incorporation of these document features. We propose a unified language model, which integrates multiple document features, namely, multiple levels of associations, PageRank, indegree, internal document structure, and URL length. Our experiments on two TREC Enterprise Track collections, i.e., the W3C and CSIRO datasets, demonstrate that the natures of the two organizational intranets and two types of expert finding tasks, i.e., key contact finding for CSIRO and knowledgeable person finding for W3C, influence the effectiveness of different document features. Our work provides insights into which document features work for certain types of expert finding tasks, and helps design expert finding strategies that are effective for different scenarios.", "The automatic search for knowledgeable people in the scope of an organization is a key function which makes modern Enterprise search systems commercially successful and socially demanded. A number of effective approaches to expert finding were recently proposed in academic publications. 
Although, most of them use reasonably defined measures of personal expertise, they often limit themselves to rather unrealistic and sometimes oversimplified principles. In this thesis, we explore several ways to go beyond state-of-the-art assumptions used in research on expert finding and propose several novel solutions for this and related tasks. First, we describe measures of expertise that do not assume independent occurrence of terms and persons in a document what makes them perform better than the measures based on independence of all entities in a document. One of these measures makes persons central to the process of terms generation in a document. Another one assumes that the position of the person’s mention in a document with respect to the positions of query terms indicates the relation of the person to the document’s relevant content. Second, we find the ways to use not only direct expertise evidence for a person concentrated within the document space of the person’s current employer and only within those organizational documents that mention the person. We successfully utilize the predicting potential of additional indirect expertise evidence publicly available on the Web and in the organizational documents implicitly related to a person. Finally, besides the expert finding methods we proposed, we also demonstrate solutions for the tasks from related domains. In one case, we use several algorithms of multi-step relevance propagation to search for typed entities in Wikipedia. 
In another case, we suggest generic methods for placing photos uploaded to Flickr on the World map using language models of locations built entirely on the annotations provided by users with a few task specific extensions.", "A separating device for separating a pair of immiscible components from a fluid mixture, such as separating water from a diesel fuel oil water mixture, includes a housing and a filter cartridge mounted on the housing such that the filter cartridge is oriented substantially horizontally. The housing includes an inlet fitting, an outlet fitting, and a sump for receiving the fluid component separated from the mixture. Because of the orientation of the housing, the coalesced water or heavier component of the fluid mixture tends to collect in the lower portion of the filter cartridge and drains into the sump, where it may be drained periodically. The lighter component or fuel oil is communicated to the outlet port or fitting.", "A common task in many applications is to find persons who are knowledgeable about a given topic (i.e., expert finding). In this paper, we propose and develop a general probabilistic framework for studying expert finding problem and derive two families of generative models (candidate generation models and topic generation models) from the framework. These models subsume most existing language models proposed for expert finding. We further propose several techniques to improve the estimation of the proposed models, including incorporating topic expansion, using a mixture model to model candidate mentions in the supporting documents, and defining an email count-based prior in the topic generation model. Our experiments show that the proposed estimation strategies are all effective to improve retrieval accuracy.", "The Multimedia and Information Systems group at the Knowledge Media Institute of the Open University participated in the Expert Search task of the Enterprise Track in TREC 2006. 
We have proposed to address three main innovative points in a two-stage language model, which consists of a document relevance model and a cooccurrence model, in order to improve the performance of expert search. The three innovative points are based on characteristics of documents. First, document authority in terms of their PageRanks is considered in the document relevance model. Second, document internal structure is taken into account in the co-occurrence model. Third, we consider multiple levels of associations between experts and query terms in the co-occurrence model. Our experiments on the TREC2006 Expert Search task show that addressing the above three points has led to improved effectiveness of expert search on the W3C dataset." ] }
1501.05140
1555715032
The task of expert finding has been getting increasing attention in information retrieval literature. However, the current state-of-the-art is still lacking in principled approaches for combining different sources of evidence. This paper explores the usage of unsupervised rank aggregation methods as a principled approach for combining multiple estimators of expertise, derived from the textual contents, from the graph-structure of the citation patterns for the community of experts, and from profile information about the experts. We specifically experimented two unsupervised rank aggregation approaches well known in the information retrieval literature, namely CombSUM and CombMNZ. Experiments made over a dataset of academic publications for the area of Computer Science attest for the adequacy of these methods.
In the Scientometrics community, the evaluation of the scientific output of a scientist has also attracted significant interest due to the importance of obtaining unbiased and fair criteria. Most of the existing methods are based on metrics such as the total number of authored papers or the total number of citations. A comprehensive description of many of these metrics can be found in @cite_2 @cite_27 . Simple and elegant indexes, such as the Hirsch index, calculate how broad the research work of a scientist is, accounting for both productivity and impact. Graph centrality metrics inspired by PageRank, calculated over citation or co-authorship graphs, have also been extensively used @cite_0 .
{ "cite_N": [ "@cite_0", "@cite_27", "@cite_2" ], "mid": [ "2101599977", "2018392915", "2147032993" ], "abstract": [ "The field of digital libraries (DLs) coalesced in 1994: the first digital library conferences were held that year, awareness of the World Wide Web was accelerating, and the National Science Foundation awarded $24 Million (US) for the Digital Library Initiative (DLI). In this paper we examine the state of the DL domain after a decade of activity by applying social network analysis to the co-authorship network of the past ACM, IEEE, and joint ACM IEEE digital library conferences. We base our analysis on a common binary undirectional network model to represent the co-authorship network, and from it we extract several established network measures. We also introduce a weighted directional network model to represent the co-authorship network, for which we define AuthorRank as an indicator of the impact of an individual author in the network. The results are validated against conference program committee members in the same period. The results show clear advantages of PageRank and AuthorRank over degree, closeness and betweenness centrality metrics. We also investigate the amount and nature of international participation in Joint Conference on Digital Libraries (JCDL).", "Citation analysis helps in evaluating the impact of scientific collections (journals and conferences), publications and scholar authors. In this paper we examine known algorithms that are currently used for Link Analysis Ranking, and present their weaknesses over specific examples. We also introduce new alternative methods specifically designed for citation graphs. We use the SCEAS system as a base platform to introduce these new methods and perform a generalized comparison of all methods. We also introduce an aggregate function for the generation of author ranking based on publication ranking. 
Finally, we try to evaluate the rank results based on the prizes of 'VLDB 10 Year Award', 'SIGMOD Test of Time Award' and 'SIGMOD E.F.Codd Innovations Award'.", "Citation analysis is performed to evaluate the impact of scientific collections (journals and conferences), publications and scholar authors. In this paper we investigate alternative methods to provide a generalized approach to rank scientific publications. We use the SCEAS system [12] as a base platform to introduce new methods that can be used for ranking scientific publications. Moreover, we tune our approach along the reasoning of the prizes 'VLDB 10 Year Award' and 'SIGMOD Test of Time Award', which have been awarded in the course of the top two database conferences. Our approach can be used to objectively suggest the publications and the respective authors the are more likely to be awarded in the near future at these conferences." ] }
1501.05140
1555715032
The task of expert finding has been getting increasing attention in the information retrieval literature. However, the current state of the art is still lacking principled approaches for combining different sources of evidence. This paper explores the use of unsupervised rank aggregation methods as a principled approach for combining multiple estimators of expertise, derived from the textual contents, from the graph structure of the citation patterns for the community of experts, and from profile information about the experts. We specifically experimented with two unsupervised rank aggregation approaches well known in the information retrieval literature, namely CombSUM and CombMNZ. Experiments made over a dataset of academic publications for the area of Computer Science attest to the adequacy of these methods.
Previous studies have addressed the problem of combining multiple information retrieval mechanisms through unsupervised rank aggregation, often based on methods that take their inspiration from voting protocols proposed in the area of statistics and in the social sciences. Given @math voters (i.e., the different estimators of expertise) and @math objects (i.e., the experts), we can see each voter as returning an ordered list of the @math objects according to their own preferences. From these @math ordered lists, the problem of unsupervised rank aggregation concerns finding a single consensus list which optimally combines the @math rankings. There are different methods for addressing the problem which, according to Julien Ah-Pine @cite_18 , can be divided into two large families of methods:
{ "cite_N": [ "@cite_18" ], "mid": [ "1481216242" ], "abstract": [ "This paper is concerned with the problem of unsupervised rank aggregation in the context of metasearch in information retrieval. In such tasks, we are given many partial ordered lists of retrieved items provided by many search engines and we want to define a way for aggregating those lists in order to find out a consensus. One classical approach consists in aggregating, for each retrieved item, the scores given by the different search engines. Then, we use the resulting aggregated scores distribution in order to infer a consensus ordered list. In this paper we investigate whether aggregation operators defined in the fields of multi-sensor fusion and multicriteria decision making are of interest for metasearch problems or not. Moreover, another purpose of this paper is to introduce a new aggregation operator, its foundations and its properties. We finally test all these aggregation operators for metasearch tasks using the Letor 2.0 dataset. Our results show that among the studied aggregation functions, the ones which are more compensatory outperform the baseline methods CombSUM and CombMNZ." ] }
1501.05140
1555715032
The task of expert finding has been getting increasing attention in the information retrieval literature. However, the current state of the art is still lacking principled approaches for combining different sources of evidence. This paper explores the use of unsupervised rank aggregation methods as a principled approach for combining multiple estimators of expertise, derived from the textual contents, from the graph structure of the citation patterns for the community of experts, and from profile information about the experts. We specifically experimented with two unsupervised rank aggregation approaches well known in the information retrieval literature, namely CombSUM and CombMNZ. Experiments made over a dataset of academic publications for the area of Computer Science attest to the adequacy of these methods.
Positional methods - For each object, we consider the preferences (i.e., the scores) given by each voter, aggregating them through some particular technique and finally re-ranking objects using the aggregated preferences. The first positional method was proposed by Borda, but linear and non-linear combinations of preferences, such as their arithmetic mean or the triangular norm, are also frequently used @cite_14 @cite_18 .
{ "cite_N": [ "@cite_18", "@cite_14" ], "mid": [ "1481216242", "2078396654" ], "abstract": [ "This paper is concerned with the problem of unsupervised rank aggregation in the context of metasearch in information retrieval. In such tasks, we are given many partial ordered lists of retrieved items provided by many search engines and we want to define a way for aggregating those lists in order to find out a consensus. One classical approach consists in aggregating, for each retrieved item, the scores given by the different search engines. Then, we use the resulting aggregated scores distribution in order to infer a consensus ordered list. In this paper we investigate whether aggregation operators defined in the fields of multi-sensor fusion and multicriteria decision making are of interest for metasearch problems or not. Moreover, another purpose of this paper is to introduce a new aggregation operator, its foundations and its properties. We finally test all these aggregation operators for metasearch tasks using the Letor 2.0 dataset. Our results show that among the studied aggregation functions, the ones which are more compensatory outperform the baseline methods CombSUM and CombMNZ.", "The TREC-2 project at Virginai Tech focused on methods for combining the evidence from multiple retrieval runs to improve performance over any single retrieval method. This paper describes one such method that has been shown to increase performance by combining the similarity values from five different retrieval runs using both vector space and P-norm extended boolean retrieval methods" ] }
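The positional family described above can be illustrated with a minimal sketch of the classic Borda count: an item at rank r in a list of n items receives n - 1 - r points, and items are re-ranked by their aggregated totals. The function name and the assumption that every voter ranks all items are ours, not taken from the cited works.

```python
def borda(rankings):
    """Borda count: an item at rank r in a list of n items gets n - 1 - r points."""
    points = {}
    for ranking in rankings:
        n = len(ranking)
        for rank, item in enumerate(ranking):
            points[item] = points.get(item, 0) + (n - 1 - rank)
    # Re-rank objects by their aggregated (summed) positional scores.
    return sorted(points, key=lambda item: points[item], reverse=True)
```

For example, with three voters ranking {a, b, c}, the item most often placed near the top wins the consensus list.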
1501.05140
1555715032
The task of expert finding has been getting increasing attention in the information retrieval literature. However, the current state of the art is still lacking principled approaches for combining different sources of evidence. This paper explores the use of unsupervised rank aggregation methods as a principled approach for combining multiple estimators of expertise, derived from the textual contents, from the graph structure of the citation patterns for the community of experts, and from profile information about the experts. We specifically experimented with two unsupervised rank aggregation approaches well known in the information retrieval literature, namely CombSUM and CombMNZ. Experiments made over a dataset of academic publications for the area of Computer Science attest to the adequacy of these methods.
Majoritarian methods - Pairwise comparison matrices are computed for the objects, mostly based upon the aggregation of order relations using association criteria such as Condorcet's criterion, or distance criteria such as Kendall's distance. Other majoritarian methods have also recently been proposed, using Markov chain models @cite_4 or techniques from multicriteria decision theory @cite_11 .
{ "cite_N": [ "@cite_4", "@cite_11" ], "mid": [ "2051834357", "2000300218" ], "abstract": [ "We consider the problem of combining ranking results from various sources. In the context of the Web, the main applications include building meta-search engines, combining ranking functions, selecting documents based on multiple criteria, and improving search precision through word associations. We develop a set of techniques for the rank aggregation problem and compare their performance to that of well-known methods. A primary goal of our work is to design rank aggregation techniques that can e ectively combat ,\" a serious problem in Web searches. Experiments show that our methods are simple, e cient, and e ective.", "Research in Information Retrieval usually shows performanceimprovement when many sources of evidence are combined to produce a ranking of documents (e.g., texts, pictures, sounds, etc.). In this paper, we focus on the rank aggregation problem, also called data fusion problem, where rankings of documents, searched into the same collection and provided by multiple methods, are combined in order to produce a new ranking. In this context, we propose a rank aggregation method within a multiple criteria framework using aggregation mechanisms based on decision rules identifying positive and negative reasons for judging whether a document should get a better rank than another. We show that the proposed method deals well with the Information Retrieval distinctive features. Experimental results are reported showing that the suggested method performs better than the well-known CombSUM and CombMNZ operators." ] }
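As an illustration of the majoritarian family, the sketch below builds the pairwise comparison matrix from full rankings and orders items by a simple Copeland score (pairwise victories minus defeats), one common way of operationalizing Condorcet's criterion. It assumes every voter ranks every item and is not the specific method of any cited paper.

```python
from itertools import combinations

def copeland(rankings):
    """Order items by pairwise victories minus defeats (Copeland score).

    Assumes each ranking is a full ordering of the same item set.
    """
    items = sorted(set(rankings[0]))
    # Pairwise comparison matrix: wins[(a, b)] = number of voters ranking a above b.
    wins = {(a, b): 0 for a in items for b in items if a != b}
    for ranking in rankings:
        pos = {item: i for i, item in enumerate(ranking)}
        for a, b in combinations(items, 2):
            if pos[a] < pos[b]:
                wins[(a, b)] += 1
            else:
                wins[(b, a)] += 1
    score = dict.fromkeys(items, 0)
    for a, b in combinations(items, 2):
        if wins[(a, b)] > wins[(b, a)]:
            score[a] += 1
            score[b] -= 1
        elif wins[(b, a)] > wins[(a, b)]:
            score[b] += 1
            score[a] -= 1
    return sorted(items, key=lambda item: score[item], reverse=True)
```

A Condorcet winner (an item that beats every other in head-to-head comparisons) always tops this ordering when one exists.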
1501.05140
1555715032
The task of expert finding has been getting increasing attention in the information retrieval literature. However, the current state of the art is still lacking principled approaches for combining different sources of evidence. This paper explores the use of unsupervised rank aggregation methods as a principled approach for combining multiple estimators of expertise, derived from the textual contents, from the graph structure of the citation patterns for the community of experts, and from profile information about the experts. We specifically experimented with two unsupervised rank aggregation approaches well known in the information retrieval literature, namely CombSUM and CombMNZ. Experiments made over a dataset of academic publications for the area of Computer Science attest to the adequacy of these methods.
Fox and Shaw @cite_14 @cite_18 defined several rank aggregation techniques (e.g., CombSUM and CombMNZ) which have since been the object of much IR research, including in the area of expert search @cite_15 . In our experiments, we compared the CombSUM and CombMNZ unsupervised rank aggregation methods, which are detailed in Section 3.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_15" ], "mid": [ "1481216242", "2078396654", "2007810306" ], "abstract": [ "This paper is concerned with the problem of unsupervised rank aggregation in the context of metasearch in information retrieval. In such tasks, we are given many partial ordered lists of retrieved items provided by many search engines and we want to define a way for aggregating those lists in order to find out a consensus. One classical approach consists in aggregating, for each retrieved item, the scores given by the different search engines. Then, we use the resulting aggregated scores distribution in order to infer a consensus ordered list. In this paper we investigate whether aggregation operators defined in the fields of multi-sensor fusion and multicriteria decision making are of interest for metasearch problems or not. Moreover, another purpose of this paper is to introduce a new aggregation operator, its foundations and its properties. We finally test all these aggregation operators for metasearch tasks using the Letor 2.0 dataset. Our results show that among the studied aggregation functions, the ones which are more compensatory outperform the baseline methods CombSUM and CombMNZ.", "The TREC-2 project at Virginai Tech focused on methods for combining the evidence from multiple retrieval runs to improve performance over any single retrieval method. This paper describes one such method that has been shown to increase performance by combining the similarity values from five different retrieval runs using both vector space and P-norm extended boolean retrieval methods", "In an expert search task, the users’ need is to identify people who have relevant expertise to a topic of interest. An expert search system predicts and ranks the expertise of a set of candidate persons with respect to the users’ query. 
In this paper, we propose a novel approach for predicting and ranking candidate expertise with respect to a query, called the Voting Model for Expert Search. In the Voting Model, we see the problem of ranking experts as a voting problem. We model the voting problem using 12 various voting techniques, which are inspired from the data fusion field. We investigate the effectiveness of the Voting Model and the associated voting techniques across a range of document weighting models, in the context of the TREC 2005 and TREC 2006 Enterprise tracks. The evaluation results show that the voting paradigm is very effective, without using any query or collection-specific heuristics. Moreover, we show that improving the quality of the underlying document representation can significantly improve the retrieval performance of the voting techniques on an expert search task. In particular, we demonstrate that applying field-based weighting models improves the ranking of candidates. Finally, we demonstrate that the relative performance of the voting techniques for the proposed approach is stable on a given task regardless of the used weighting models, suggesting that some of the proposed voting techniques will always perform better than other voting techniques." ] }
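A minimal sketch of the two data fusion operators named above: CombSUM sums the scores each ranker assigns to an item, while CombMNZ multiplies that sum by the number of rankers that scored the item. The sketch assumes scores are already normalized to a common scale, and the function names are our own.

```python
def comb_sum(score_lists):
    """CombSUM: sum each item's scores across all rankers."""
    totals = {}
    for scores in score_lists:
        for item, s in scores.items():
            totals[item] = totals.get(item, 0.0) + s
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

def comb_mnz(score_lists):
    """CombMNZ: CombSUM multiplied by the number of rankers scoring the item."""
    totals, hits = {}, {}
    for scores in score_lists:
        for item, s in scores.items():
            totals[item] = totals.get(item, 0.0) + s
            hits[item] = hits.get(item, 0) + 1
    ranked = [(item, totals[item] * hits[item]) for item in totals]
    return sorted(ranked, key=lambda kv: kv[1], reverse=True)
```

CombMNZ thus boosts items that appear in many of the input rankings, which is why it often behaves differently from CombSUM on sparse overlaps.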
1501.05387
2951135776
For large-scale graph analytics on the GPU, the irregularity of data access and control flow, and the complexity of programming GPUs have been two significant challenges for developing a programmable high-performance graph library. "Gunrock", our graph-processing system designed specifically for the GPU, uses a high-level, bulk-synchronous, data-centric abstraction focused on operations on a vertex or edge frontier. Gunrock achieves a balance between performance and expressiveness by coupling high performance GPU computing primitives and optimization strategies with a high-level programming model that allows programmers to quickly develop new graph primitives with small code size and minimal GPU programming knowledge. We evaluate Gunrock on five key graph primitives and show that Gunrock has on average at least an order of magnitude speedup over Boost and PowerGraph, comparable performance to the fastest GPU hardwired primitives, and better performance than any other GPU high-level graph library.
In Medusa @cite_12 , Zhong and He presented their pioneering work on a high-level GPU-based system for parallel graph processing, using a message-passing model. CuSha @cite_30 , targeting a GAS abstraction, implements the parallel-sliding-window (PSW) graph representation on the GPU to avoid non-coalesced memory access. CuSha additionally addresses irregular memory access by preprocessing the graph data structure ("G-Shards"). Both frameworks offer a small set of user-defined APIs but are challenged by load imbalance and thus fail to achieve the same level of performance as low-level GPU graph implementations. MapGraph @cite_26 also adopts the GAS abstraction and achieves some of the best performance results for programmable single-node GPU graph computation.
{ "cite_N": [ "@cite_30", "@cite_26", "@cite_12" ], "mid": [ "1965830721", "2123538390", "2128653745" ], "abstract": [ "Vertex-centric graph processing is employed by many popular algorithms (e.g., PageRank) due to its simplicity and efficient use of asynchronous parallelism. The high compute power provided by SIMT architecture presents an opportunity for accelerating these algorithms using GPUs. Prior works of graph processing on a GPU employ Compressed Sparse Row (CSR) form for its space-efficiency; however, CSR suffers from irregular memory accesses and GPU underutilization that limit its performance. In this paper, we present CuSha, a CUDA-based graph processing framework that overcomes the above obstacle via use of two novel graph representations: G-Shards and Concatenated Windows (CW). G-Shards uses a concept recently introduced for non-GPU systems that organizes a graph into autonomous sets of ordered edges called shards. CuSha's mapping of GPU hardware resources on to shards allows fully coalesced memory accesses. CW is a novel representation that enhances the use of shards to achieve higher GPU utilization for processing sparse graphs. Finally, CuSha fully utilizes the GPU power by processing multiple shards in parallel on GPU's streaming multiprocessors. For ease of programming, CuSha allows the user to define the vertex-centric computation and plug it into its framework for parallel processing of large graphs. Our experiments show that CuSha provides significant speedups over the state-of-the-art CSR-based virtual warp-centric method for processing graphs on GPUs.", "High performance graph analytics are critical for a long list of application domains. In recent years, the rapid advancement of many-core processors, in particular graphical processing units (GPUs), has sparked a broad interest in developing high performance parallel graph programs on these architectures. 
However, the SIMT architecture used in GPUs places particular constraints on both the design and implementation of the algorithms and data structures, making the development of such programs difficult and time-consuming. We present MapGraph, a high performance parallel graph programming framework that delivers up to 3 billion Traversed Edges Per Second (TEPS) on a GPU. MapGraph provides a high-level abstraction that makes it easy to write graph programs and obtain good parallel speedups on GPUs. To deliver high performance, MapGraph dynamically chooses among different scheduling strategies depending on the size of the frontier and the size of the adjacency lists for the vertices in the frontier. In addition, a Structure Of Arrays (SOA) pattern is used to ensure coalesced memory access. Our experiments show that, for many graph analytics algorithms, an implementation, with our abstraction, is up to two orders of magnitude faster than a parallel CPU implementation and is comparable to state-of-the-art, manually optimized GPU implementations. In addition, with our abstraction, new graph analytics can be developed with relatively little effort.", "Graphs are common data structures for many applications, and efficient graph processing is a must for application performance. Recently, the graphics processing unit (GPU) has been adopted to accelerate various graph processing algorithms such as BFS and shortest paths. However, it is difficult to write correct and efficient GPU programs and even more difficult for graph processing due to the irregularities of graph structures. To simplify graph processing on GPUs, we propose a programming framework called Medusa which enables developers to leverage the capabilities of GPUs by writing sequential C C++ code. Medusa offers a small set of user-defined APIs and embraces a runtime system to automatically execute those APIs in parallel on the GPU. 
We develop a series of graph-centric optimizations based on the architecture features of GPUs for efficiency. Additionally, Medusa is extended to execute on multiple GPUs within a machine. Our experiments show that 1) Medusa greatly simplifies implementation of GPGPU programs for graph processing, with many fewer lines of source code written by developers and 2) the optimization techniques significantly improve the performance of the runtime system, making its performance comparable with or better than manually tuned GPU graph operations." ] }
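The frontier-centric abstraction that Gunrock and these frameworks build on can be sketched on the CPU as a level-synchronous BFS, where each iteration "advances" the current vertex frontier along its outgoing edges and "filters" out already-visited vertices. This is an illustrative serial analogue, not code from any of the systems above.

```python
def bfs_frontier(adj, source):
    """Level-synchronous BFS over a dict-of-lists adjacency structure."""
    depth = {source: 0}
    frontier = [source]
    level = 0
    while frontier:
        level += 1
        next_frontier = []
        for u in frontier:            # advance: expand each frontier vertex
            for v in adj.get(u, ()):  # traverse its outgoing edges
                if v not in depth:    # filter: drop already-visited vertices
                    depth[v] = level
                    next_frontier.append(v)
        frontier = next_frontier      # new frontier for the next iteration
    return depth
```

On the GPU, the inner loops become data-parallel kernels over the frontier, which is where load imbalance across adjacency lists becomes the central performance concern.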
1501.05028
2950482546
In this work, we study the problem of testing properties of the spectrum of a mixed quantum state. Here one is given @math copies of a mixed state @math and the goal is to distinguish whether @math 's spectrum satisfies some property @math or is at least @math -far in @math -distance from satisfying @math . This problem was promoted in the survey of Montanaro and de Wolf under the name of testing unitarily invariant properties of mixed states. It is the natural quantum analogue of the classical problem of testing symmetric properties of probability distributions. Here, the hope is for algorithms with subquadratic copy complexity in the dimension @math . This is because the "empirical Young diagram (EYD) algorithm" can estimate the spectrum of a mixed state up to @math -accuracy using only @math copies. In this work, we show that given a mixed state @math : (i) @math copies are necessary and sufficient to test whether @math is the maximally mixed state, i.e., has spectrum @math ; (ii) @math copies are necessary and sufficient to test with one-sided error whether @math has rank @math , i.e., has at most @math nonzero eigenvalues; (iii) @math copies are necessary and sufficient to distinguish whether @math is maximally mixed on an @math -dimensional or an @math -dimensional subspace; and (iv) The EYD algorithm requires @math copies to estimate the spectrum of @math up to @math -accuracy, nearly matching the known upper bound. In addition, we simplify part of the proof of the upper bound. Our techniques involve the asymptotic representation theory of the symmetric group; in particular Kerov's algebra of polynomial functions on Young diagrams.
The second result comes from the work of @cite_59 . It can be thought of as a quantum analogue of Fact: setting @math , Theorem gives a linear lower bound of @math for various properties of spectra. This is in contrast with property testing of probability distributions, in which sublinear algorithms are the main goal, with the Birthday Paradox typically precluding sub- @math -sample algorithms.
{ "cite_N": [ "@cite_59" ], "mid": [ "2096150384" ], "abstract": [ "Schur duality decomposes many copies of a quantum state into subspaces labeled by partitions, a decomposition with applications throughout quantum information theory. Here we consider applying Schur duality to the problem of distinguishing coset states in the standard approach to the hidden subgroup problem.We observe that simply measuring the partition (a procedure we call weak Schur sampling) provides very little information about the hidden subgroup. Furthermore, we show that under quite general assumptions, even a combination of weak Fourier sampling and weak Schur sampling fails to identify the hidden subgroup. We also prove tight bounds on how many coset states are required to solve the hidden subgroup problem by weak Schur sampling, and we relate this question to a quantum version of the collision problem." ] }
1501.05152
2950659331
Do object part localization methods produce bilaterally symmetric results on mirror images? Surprisingly not, even though state-of-the-art methods augment the training set with mirrored images. In this paper we take a closer look into this issue. We first introduce the concept of mirrorability as the ability of a model to produce symmetric results on mirrored images, and introduce a corresponding measure, namely the mirror error, defined as the difference between the detection result on an image and the mirror of the detection result on its mirror image. We evaluate the mirrorability of several state-of-the-art algorithms in two of the most intensively studied problems, namely human pose estimation and face alignment. Our experiments lead to several interesting findings: 1) surprisingly, most state-of-the-art methods struggle to preserve mirror symmetry, despite the fact that they have very similar overall performance on the original and mirror images; 2) the low mirrorability is not caused by training or testing sample bias - all algorithms are trained on both the original images and their mirrored versions; 3) the mirror error is strongly correlated with the localization alignment error (with correlation coefficients around 0.7). Since the mirror error is calculated without knowledge of the ground truth, we show two interesting applications - in the first it is used to guide the selection of difficult samples and in the second to give feedback in a popular Cascaded Pose Regression method for face alignment.
As a method that estimates the quality of the output of a vision system, our method is related to works like meta-recognition @cite_8 , face recognition score analysis @cite_21 and the recent failure-alert approach @cite_4 for failure prediction. Our method differs from those works in two prominent aspects: (1) we focus on the fine-grained object part localization problem, while they focus on instance-level recognition or detection; and (2) we do not train any additional models for evaluation, whereas all those methods rely on meta-systems. In the specific application of evaluating the performance of Human Pose Estimation, @cite_26 proposed an evaluation algorithm; however, such an evaluation again requires a meta-model, and it only works for that specific application.
{ "cite_N": [ "@cite_21", "@cite_4", "@cite_26", "@cite_8" ], "mid": [ "2163022578", "1991671938", "1825375243", "" ], "abstract": [ "This paper presents methods of modeling and predicting face recognition (FR) system performance based on analysis of similarity scores. We define the performance of an FR system as its recognition accuracy, and consider the intrinsic and extrinsic factors affecting its performance. The intrinsic factors of an FR system include the gallery images, the FR algorithm, and the tuning parameters. The extrinsic factors include mainly query image conditions. For performance modeling, we propose the concept of \"perfect recognition\", based on which a performance metric is extracted from perfect recognition similarity scores (PRSS) to relate the performance of an FR system to its intrinsic factors. The PRSS performance metric allows tuning FR algorithm parameters offline for near optimal performance. In addition, the performance metric extracted from query images is used to adjust face alignment parameters online for improved performance. For online prediction of the performance of an FR system on query images, features are extracted from the actual recognition similarity scores and their corresponding PRSS. Using such features, we can predict online if an individual query image can be correctly matched by the FR system, based on which we can reduce the incorrect match rates. Experimental results demonstrate that the performance of an FR system can be significantly improved using the presented methods", "Computer vision systems today fail frequently. They also fail abruptly without warning or explanation. Alleviating the former has been the primary focus of the community. In this work, we hope to draw the community's attention to the latter, which is arguably equally problematic for real applications. We promote two metrics to evaluate failure prediction. 
We show that a surprisingly straightforward and general approach, that we call ALERT, can predict the likely accuracy (or failure) of a variety of computer vision systems--semantic segmentation, vanishing point and camera parameter estimation, and image memorability prediction--on individual input images. We also explore attribute prediction, where classifiers are typically meant to generalize to new unseen categories. We show that ALERT can be useful in predicting failures of this transfer. Finally, we leverage ALERT to improve the performance of a downstream application of attribute prediction: zero-shot learning. We show that ALERT can outperform several strong baselines for zero-shot learning on four datasets.", "Most current vision algorithms deliver their output 'as is', without indicating whether it is correct or not. In this paper we propose evaluator algorithms that predict if a vision algorithm has succeeded. We illustrate this idea for the case of Human Pose Estimation (HPE). We describe the stages required to learn and test an evaluator, including the use of an annotated ground truth dataset for training and testing the evaluator (and we provide a new dataset for the HPE case), and the development of auxiliary features that have not been used by the (HPE) algorithm, but can be learnt by the evaluator to predict if the output is correct or not. Then an evaluator is built for each of four recently developed HPE algorithms using their publicly available implementations: Eichner and Ferrari [5], [16], [2] and Yang and Ramanan [22]. We demonstrate that in each case the evaluator is able to predict if the algorithm has correctly estimated the pose or not.", "" ] }
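The mirror error discussed above can be sketched as follows for 2D landmarks: mirror the detection produced on the flipped image back into the original frame (flip x and swap left/right landmark indices), then take its mean distance to the detection on the original image. The helper names and the landmark-swap convention are illustrative assumptions, not the papers' code.

```python
import math

def mirror_landmarks(pts, width, swap_pairs):
    """Reflect (x, y) landmarks about the vertical image axis and swap
    left/right landmark indices (e.g. left eye <-> right eye)."""
    out = [(width - 1 - x, y) for x, y in pts]
    for i, j in swap_pairs:
        out[i], out[j] = out[j], out[i]
    return out

def mirror_error(pred_orig, pred_on_mirror, width, swap_pairs):
    """Mean Euclidean distance between the detection on an image and the
    back-mirrored detection on its mirror image; needs no ground truth."""
    back = mirror_landmarks(pred_on_mirror, width, swap_pairs)
    dists = [math.dist(p, q) for p, q in zip(pred_orig, back)]
    return sum(dists) / len(dists)
```

A perfectly mirror-symmetric detector yields a mirror error of zero, which is what makes the measure usable as an annotation-free quality signal.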
1501.04985
1766799846
Assume that two robots are located at the centre of a unit disk. Their goal is to evacuate from the disk through an exit at an unknown location on the boundary of the disk. At any time the robots can move anywhere they choose on the disk, independently of each other, with maximum speed @math . The robots can cooperate by exchanging information whenever they meet. We study algorithms for the two robots to minimize the evacuation time: the time when both robots reach the exit. In [CGGKMP14] the authors gave an algorithm defining trajectories for the two robots yielding evacuation time at most @math and also proved that any algorithm has evacuation time at least @math . We improve both the upper and lower bounds on the evacuation time of a unit disk. Namely, we present a new non-trivial algorithm whose evacuation time is at most @math and show that any algorithm has evacuation time at least @math . To achieve the upper bound, we designed an algorithm which non-intuitively proposes a forced meeting between the two robots, even if the exit has not been found by either of them.
Our problem is also related to the rendezvous problem and the problem of gathering @cite_4 @cite_6 . Indeed, our problem can be seen as a version of a rendezvous problem for three robots, where one of them remains stationary.
{ "cite_N": [ "@cite_4", "@cite_6" ], "mid": [ "2132239960", "2010017329" ], "abstract": [ "We apply a new method of analysis to the asymmetric rendezvous search problem on the line (ARSPL). This problem, previously studied in a paper of Alpern and Gal (1995), asks how two blind, speed one players placed a distance d apart on the line, can find each other in minimum expected time. The distance d is drawn from a known cumulative probability distribution G, and the players are faced in random directions. We show that the ARSPL is strategically equivalent to a new problem we call the double linear search problem (DLSP), where an object is placed equiprobably on one of two lines, and equiprobably at positions ±d. A searcher is placed at the origin of each of these lines. The two searchers move with a combined speed of one, to minimize the expected time before one of them finds the object. Using results from a concurrent paper of the first author and J. V. Howard (1998), we solve the DLSP (and hence the ARSPL) for the case where G is convex on its support, and show that the solution is that conjectured in a paper of Baston and Gal (1998).", "In this paper we study the problem of gathering a collection of identical oblivious mobile robots in the same location of the plane. Previous investigations have focused mostly on the unlimited visibility setting, where each robot can always see all the others regardless of their distance. In the more difficult and realistic setting where the robots have limited visibility, the existing algorithmic results are only for convergence (towards a common point, without ever reaching it) and only for semi-synchronous environments, where robots' movements are assumed to be performed instantaneously. In contrast, we study this problem in a totally asynchronous setting, where robots' actions, computations, and movements require a finite but otherwise unpredictable amount of time. 
We present a protocol that allows anonymous oblivious robots with limited visibility to gather in the same location in finite time, provided they have orientation (i.e., agreement on a coordinate system).Our result indicates that, with respect to gathering, orientation is at least as powerful as instantaneous movements." ] }
1501.04560
2141350700
Most existing zero-shot learning approaches exploit transfer learning via an intermediate semantic representation shared between an annotated auxiliary dataset and a target dataset with different classes and no annotation. A projection from a low-level feature space to the semantic representation space is learned from the auxiliary dataset and applied without adaptation to the target dataset. In this paper we identify two inherent limitations with these approaches. First, due to having disjoint and potentially unrelated classes, the projection functions learned from the auxiliary dataset domain are biased when applied directly to the target dataset domain. We call this problem the projection domain shift problem and propose a novel framework, transductive multi-view embedding, to solve it. The second limitation is the prototype sparsity problem which refers to the fact that for each target class, only a single prototype is available for zero-shot learning given a semantic representation. To overcome this problem, a novel heterogeneous multi-view hypergraph label propagation method is formulated for zero-shot learning in the transductive embedding space. It effectively exploits the complementary information offered by different semantic representations and takes advantage of the manifold structures of multiple representation spaces in a coherent manner. We demonstrate through extensive experiments that the proposed approach (1) rectifies the projection shift between the auxiliary and target domains, (2) exploits the complementarity of multiple semantic representations, (3) significantly outperforms existing methods for both zero-shot and N-shot recognition on three image and video benchmark datasets, and (4) enables novel cross-view annotation tasks.
was considered by @cite_3 @cite_56 who introduced a generative model for user-defined and latent attributes. A simple transductive zero-shot learning algorithm is proposed: averaging the prototype's k-nearest neighbours to exploit the test data attribute distribution. @cite_54 proposed a more elaborate transductive strategy, using graph-based label propagation to exploit the manifold structure of the test data. These studies effectively transform the ZSL task into a transductive semi-supervised learning task @cite_51 with prototypes providing the few labelled instances. Nevertheless, these studies and this paper (as with most previous work @cite_19 @cite_24 @cite_36 ) only consider recognition among the novel classes: unifying zero-shot with supervised learning remains an open challenge @cite_25 .
{ "cite_N": [ "@cite_36", "@cite_54", "@cite_3", "@cite_56", "@cite_24", "@cite_19", "@cite_51", "@cite_25" ], "mid": [ "2077071968", "2151575489", "2150674759", "2003723718", "2134270519", "2128532956", "2136504847", "2950276680" ], "abstract": [ "While knowledge transfer (KT) between object classes has been accepted as a promising route towards scalable recognition, most experimental KT studies are surprisingly limited in the number of object classes considered. To support claims of KT w.r.t. scalability we thus advocate to evaluate KT in a large-scale setting. To this end, we provide an extensive evaluation of three popular approaches to KT on a recently proposed large-scale data set, the ImageNet Large Scale Visual Recognition Competition 2010 data set. In a first setting they are directly compared to one-vs-all classification often neglected in KT papers and in a second setting we evaluate their ability to enable zero-shot learning. While none of the KT methods can improve over one-vs-all classification they prove valuable for zero-shot learning, especially hierarchical and direct similarity based KT. We also propose and describe several extensions of the evaluated approaches that are necessary for this large-scale study.", "Category models for objects or activities typically rely on supervised learning requiring sufficiently large training sets. Transferring knowledge from known categories to novel classes with no or only a few labels is far less researched even though it is a common scenario. In this work, we extend transfer learning with semi-supervised learning to exploit unlabeled instances of (novel) categories with no or only a few labeled instances. Our proposed approach Propagated Semantic Transfer combines three techniques. First, we transfer information from known to novel categories by incorporating external knowledge, such as linguistic or expert-specified information, e.g., by a mid-level layer of semantic attributes. 
Second, we exploit the manifold structure of novel classes. More specifically, we adapt a graph-based learning algorithm - so far only used for semi-supervised learning - to zero-shot and few-shot learning. Third, we improve the local neighborhood in such graph structures by replacing the raw feature-based representation with a mid-level object- or attribute-based representation. We evaluate our approach on three challenging datasets in two different applications, namely on Animals with Attributes and ImageNet for image classification and on MPII Composites for activity recognition. Our approach consistently outperforms state-of-the-art transfer and semi-supervised approaches on all datasets.", "The rapid development of social video sharing platforms has created a huge demand for automatic video classification and annotation techniques, in particular for videos containing social activities of a group of people (e.g. YouTube video of a wedding reception). Recently, attribute learning has emerged as a promising paradigm for transferring learning to sparsely labelled classes in object or single-object short action classification. In contrast to existing work, this paper, for the first time, tackles the problem of attribute learning for understanding group social activities with sparse labels. This problem is more challenging because of the complex multi-object nature of social activities, and the unstructured nature of the activity context. To solve this problem, we (1) contribute an unstructured social activity attribute (USAA) dataset with both visual and audio attributes, (2) introduce the concept of semi-latent attribute space and (3) propose a novel model for learning the latent attributes which alleviates the dependence of existing models on exact and exhaustive manual specification of the attribute-space. 
We show that our framework is able to exploit latent attributes to outperform contemporary approaches for addressing a variety of realistic multi-media sparse data learning tasks including: multi-task learning, N-shot transfer learning, learning with label noise and importantly zero-shot learning.", "The rapid development of social media sharing has created a huge demand for automatic media classification and annotation techniques. Attribute learning has emerged as a promising paradigm for bridging the semantic gap and addressing data sparsity via transferring attribute knowledge in object recognition and relatively simple action classification. In this paper, we address the task of attribute learning for understanding multimedia data with sparse and incomplete labels. In particular, we focus on videos of social group activities, which are particularly challenging and topical examples of this task because of their multimodal content and complex and unstructured nature relative to the density of annotations. To solve this problem, we 1) introduce a concept of semilatent attribute space, expressing user-defined and latent attributes in a unified framework, and 2) propose a novel scalable probabilistic topic model for learning multimodal semilatent attributes, which dramatically reduces requirements for an exhaustive accurate attribute ontology and expensive annotation effort. We show that our framework is able to exploit latent attributes to outperform contemporary approaches for addressing a variety of realistic multimedia sparse data learning tasks including: multitask learning, learning with label noise, N-shot transfer learning, and importantly zero-shot learning.", "We study the problem of object classification when training and test classes are disjoint, i.e. no training examples of the target classes are available. 
This setup has hardly been studied in computer vision research, but it is the rule rather than the exception, because the world contains tens of thousands of different object classes and for only a very few of them image, collections have been formed and annotated with suitable class labels. In this paper, we tackle the problem by introducing attribute-based classification. It performs object detection based on a human-specified high-level description of the target objects instead of training images. The description consists of arbitrary semantic attributes, like shape, color or even geographic information. Because such properties transcend the specific learning task at hand, they can be pre-learned, e.g. from image datasets unrelated to the current task. Afterwards, new classes can be detected based on their attribute representation, without the need for a new training phase. In order to evaluate our method and to facilitate research in this area, we have assembled a new large-scale dataset, “Animals with Attributes”, of over 30,000 animal images that match the 50 classes in Osherson's classic table of how strongly humans associate 85 semantic attributes with animal classes. Our experiments show that by using an attribute layer it is indeed possible to build a learning object detection system that does not require any training images of the target classes.", "We study the problem of object recognition for categories for which we have no training examples, a task also called zero--data or zero-shot learning. This situation has hardly been studied in computer vision research, even though it occurs frequently; the world contains tens of thousands of different object classes, and image collections have been formed and suitably annotated for only a few of them. To tackle the problem, we introduce attribute-based classification: Objects are identified based on a high-level description that is phrased in terms of semantic attributes, such as the object's color or shape. 
Because the identification of each such property transcends the specific learning task at hand, the attribute classifiers can be prelearned independently, for example, from existing image data sets unrelated to the current task. Afterward, new classes can be detected based on their attribute representation, without the need for a new training phase. In this paper, we also introduce a new data set, Animals with Attributes, of over 30,000 images of 50 animal classes, annotated with 85 semantic attributes. Extensive experiments on this and two more data sets show that attribute-based classification indeed is able to categorize images without access to any training images of the target classes.", "Door lock apparatus in which a door latch mechanism is operated by inner and outer door handles coupled to a latch shaft extending through the latch mechanism. Handles are coupled to ends of latch shaft by coupling devices enabling door to be locked from the inside to prevent entry from the outside but can still be opened from the inside by normal operation of outside handle. Inside coupling device has limited lost-motion which is used to operate cam device to unlock the door on actuation of inner handles.", "This work introduces a model that can recognize objects in images even if no training data is available for the objects. The only necessary knowledge about the unseen categories comes from unsupervised large text corpora. In our zero-shot framework distributional information in language can be seen as spanning a semantic basis for understanding what objects look like. Most previous zero-shot learning models can only differentiate between unseen classes. In contrast, our model can both obtain state of the art performance on classes that have thousands of training images and obtain reasonable performance on unseen classes. This is achieved by first using outlier detection in the semantic space and then two separate recognition models. 
Furthermore, our model does not require any manually defined semantic features for either words or images." ] }
1501.04560
2141350700
Most existing zero-shot learning approaches exploit transfer learning via an intermediate semantic representation shared between an annotated auxiliary dataset and a target dataset with different classes and no annotation. A projection from a low-level feature space to the semantic representation space is learned from the auxiliary dataset and applied without adaptation to the target dataset. In this paper we identify two inherent limitations with these approaches. First, due to having disjoint and potentially unrelated classes, the projection functions learned from the auxiliary dataset domain are biased when applied directly to the target dataset domain. We call this problem the projection domain shift problem and propose a novel framework, transductive multi-view embedding, to solve it. The second limitation is the prototype sparsity problem which refers to the fact that for each target class, only a single prototype is available for zero-shot learning given a semantic representation. To overcome this problem, a novel heterogeneous multi-view hypergraph label propagation method is formulated for zero-shot learning in the transductive embedding space. It effectively exploits the complementary information offered by different semantic representations and takes advantage of the manifold structures of multiple representation spaces in a coherent manner. We demonstrate through extensive experiments that the proposed approach (1) rectifies the projection shift between the auxiliary and target domains, (2) exploits the complementarity of multiple semantic representations, (3) significantly outperforms existing methods for both zero-shot and N-shot recognition on three image and video benchmark datasets, and (4) enables novel cross-view annotation tasks.
In most previous zero-shot learning studies (e.g., direct attribute prediction (DAP) @cite_19 ), the available knowledge (a single prototype per target class) is very limited. There has therefore been recent interest in additionally exploiting the unlabelled target data distribution by transductive learning @cite_54 @cite_56 . However, both @cite_54 and @cite_56 suffer from the projection domain shift problem, and are unable to effectively exploit multiple semantic representation views. In contrast, after embedding, our framework synergistically integrates the low-level feature and semantic representations by transductive multi-view hypergraph label propagation (TMV-HLP). Moreover, TMV-HLP generalises beyond zero-shot to N-shot learning if labelled instances are available for the target classes.
{ "cite_N": [ "@cite_19", "@cite_54", "@cite_56" ], "mid": [ "2128532956", "2151575489", "2003723718" ], "abstract": [ "We study the problem of object recognition for categories for which we have no training examples, a task also called zero--data or zero-shot learning. This situation has hardly been studied in computer vision research, even though it occurs frequently; the world contains tens of thousands of different object classes, and image collections have been formed and suitably annotated for only a few of them. To tackle the problem, we introduce attribute-based classification: Objects are identified based on a high-level description that is phrased in terms of semantic attributes, such as the object's color or shape. Because the identification of each such property transcends the specific learning task at hand, the attribute classifiers can be prelearned independently, for example, from existing image data sets unrelated to the current task. Afterward, new classes can be detected based on their attribute representation, without the need for a new training phase. In this paper, we also introduce a new data set, Animals with Attributes, of over 30,000 images of 50 animal classes, annotated with 85 semantic attributes. Extensive experiments on this and two more data sets show that attribute-based classification indeed is able to categorize images without access to any training images of the target classes.", "Category models for objects or activities typically rely on supervised learning requiring sufficiently large training sets. Transferring knowledge from known categories to novel classes with no or only a few labels is far less researched even though it is a common scenario. In this work, we extend transfer learning with semi-supervised learning to exploit unlabeled instances of (novel) categories with no or only a few labeled instances. Our proposed approach Propagated Semantic Transfer combines three techniques. 
First, we transfer information from known to novel categories by incorporating external knowledge, such as linguistic or expert-specified information, e.g., by a mid-level layer of semantic attributes. Second, we exploit the manifold structure of novel classes. More specifically, we adapt a graph-based learning algorithm - so far only used for semi-supervised learning - to zero-shot and few-shot learning. Third, we improve the local neighborhood in such graph structures by replacing the raw feature-based representation with a mid-level object- or attribute-based representation. We evaluate our approach on three challenging datasets in two different applications, namely on Animals with Attributes and ImageNet for image classification and on MPII Composites for activity recognition. Our approach consistently outperforms state-of-the-art transfer and semi-supervised approaches on all datasets.", "The rapid development of social media sharing has created a huge demand for automatic media classification and annotation techniques. Attribute learning has emerged as a promising paradigm for bridging the semantic gap and addressing data sparsity via transferring attribute knowledge in object recognition and relatively simple action classification. In this paper, we address the task of attribute learning for understanding multimedia data with sparse and incomplete labels. In particular, we focus on videos of social group activities, which are particularly challenging and topical examples of this task because of their multimodal content and complex and unstructured nature relative to the density of annotations. To solve this problem, we 1) introduce a concept of semilatent attribute space, expressing user-defined and latent attributes in a unified framework, and 2) propose a novel scalable probabilistic topic model for learning multimodal semilatent attributes, which dramatically reduces requirements for an exhaustive accurate attribute ontology and expensive annotation effort. 
We show that our framework is able to exploit latent attributes to outperform contemporary approaches for addressing a variety of realistic multimedia sparse data learning tasks including: multitask learning, learning with label noise, N-shot transfer learning, and importantly zero-shot learning." ] }
1501.04711
1899804838
This work focuses on representing very high-dimensional global image descriptors using very compact 64-1024 bit binary hashes for instance retrieval. We propose DeepHash: a hashing scheme based on deep networks. Key to making DeepHash work at extremely low bitrates are three important considerations -- regularization, depth and fine-tuning -- each requiring solutions specific to the hashing problem. In-depth evaluation shows that our scheme consistently outperforms state-of-the-art methods across all data sets for both Fisher Vectors and Deep Convolutional Neural Network features, by up to 20 percent over other schemes. The retrieval performance with 256-bit hashes is close to that of the uncompressed floating point features -- a remarkable 512 times compression.
Hashing schemes can be broadly categorized into unsupervised and supervised (including semi-supervised) schemes. Examples of unsupervised schemes are Iterative Quantization @cite_33 , Spectral Hashing @cite_32 , and Restricted Boltzmann Machines @cite_6 , while examples of state-of-the-art supervised schemes include Minimal Loss Hashing @cite_47 , Kernel-based Supervised Hashing @cite_29 , Ranking-based Supervised Hashing @cite_2 , and Column Generation Hashing @cite_17 . Supervised hashing schemes are typically applied to the semantic retrieval problem. In this work, we focus on instance retrieval; semantic retrieval is outside its scope.
{ "cite_N": [ "@cite_33", "@cite_29", "@cite_32", "@cite_6", "@cite_2", "@cite_47", "@cite_17" ], "mid": [ "2084363474", "2171790913", "", "2099866409", "2126210882", "2221852422", "2122205543" ], "abstract": [ "This paper addresses the problem of learning similarity-preserving binary codes for efficient retrieval in large-scale image collections. We propose a simple and efficient alternating minimization scheme for finding a rotation of zero-centered data so as to minimize the quantization error of mapping this data to the vertices of a zero-centered binary hypercube. This method, dubbed iterative quantization (ITQ), has connections to multi-class spectral clustering and to the orthogonal Procrustes problem, and it can be used both with unsupervised data embeddings such as PCA and supervised embeddings such as canonical correlation analysis (CCA). Our experiments show that the resulting binary coding schemes decisively outperform several other state-of-the-art methods.", "Fast retrieval methods are critical for large-scale and data-driven vision applications. Recent work has explored ways to embed high-dimensional features or complex distance functions into a low-dimensional Hamming space where items can be efficiently searched. However, existing methods do not apply for high-dimensional kernelized data when the underlying feature embedding for the kernel is unknown. We show how to generalize locality-sensitive hashing to accommodate arbitrary kernel functions, making it possible to preserve the algorithm's sub-linear time similarity search guarantees for a wide class of useful similarity functions. Since a number of successful image-based kernels have unknown or incomputable embeddings, this is especially valuable for image retrieval tasks. 
We validate our technique on several large-scale datasets, and show that it enables accurate and fast performance for example-based object classification, feature matching, and content-based retrieval.", "", "Most of the existing approaches to collaborative filtering cannot handle very large data sets. In this paper we show how a class of two-layer undirected graphical models, called Restricted Boltzmann Machines (RBM's), can be used to model tabular data, such as user's ratings of movies. We present efficient learning and inference procedures for this class of models and demonstrate that RBM's can be successfully applied to the Netflix data set, containing over 100 million user movie ratings. We also show that RBM's slightly outperform carefully-tuned SVD models. When the predictions of multiple RBM models and multiple SVD models are linearly combined, we achieve an error rate that is well over 6 percent better than the score of Netflix's own system.", "Hashing techniques have been intensively investigated in the design of highly efficient search engines for large-scale computer vision applications. Compared with prior approximate nearest neighbor search approaches like tree-based indexing, hashing-based search schemes have prominent advantages in terms of both storage and computational efficiencies. Moreover, the procedure of devising hash functions can be easily incorporated into sophisticated machine learning tools, leading to data-dependent and task-specific compact hash codes. Therefore, a number of learning paradigms, ranging from unsupervised to supervised, have been applied to compose appropriate hash functions. However, most of the existing hash function learning methods either treat hash function design as a classification problem or generate binary codes to satisfy pairwise supervision, and have not yet directly optimized the search accuracy. In this paper, we propose to leverage listwise supervision into a principled hash function learning framework. 
In particular, the ranking information is represented by a set of rank triplets that can be used to assess the quality of ranking. Simple linear projection-based hash functions are solved efficiently through maximizing the ranking quality over the training data. We carry out experiments on large image datasets with size up to one million and compare with the state-of-the-art hashing techniques. The extensive results corroborate that our learned hash codes via listwise supervision can provide superior search accuracy without incurring heavy computational overhead.", "We propose a method for learning similarity-preserving hash functions that map high-dimensional data onto binary codes. The formulation is based on structured prediction with latent variables and a hinge-like loss function. It is efficient to train for large datasets, scales well to large code lengths, and outperforms state-of-the-art methods.", "Fast nearest neighbor searching is becoming an increasingly important tool in solving many large-scale problems. Recently a number of approaches to learning data-dependent hash functions have been developed. In this work, we propose a column generation based method for learning data-dependent hash functions on the basis of proximity comparison information. Given a set of triplets that encode the pairwise proximity comparison information, our method learns hash functions that preserve the relative comparison relationships in the data as well as possible within the large-margin learning framework. The learning procedure is implemented using column generation and hence is named CGHash. At each iteration of the column generation procedure, the best hash function is selected. Unlike most other hashing methods, our method generalizes to new data points naturally; and has a training objective which is convex, thus ensuring that the global optimum can be identified. 
Experiments demonstrate that the proposed method learns compact binary codes and that its retrieval performance compares favorably with state-of-the-art methods when tested on a few benchmark datasets." ] }
1501.04711
1899804838
This work focuses on representing very high-dimensional global image descriptors using very compact 64-1024 bit binary hashes for instance retrieval. We propose DeepHash: a hashing scheme based on deep networks. Key to making DeepHash work at extremely low bitrates are three important considerations -- regularization, depth and fine-tuning -- each requiring solutions specific to the hashing problem. In-depth evaluation shows that our scheme consistently outperforms state-of-the-art methods across all data sets for both Fisher Vectors and Deep Convolutional Neural Network features, by up to 20 percent over other schemes. The retrieval performance with 256-bit hashes is close to that of the uncompressed floating point features -- a remarkable 512 times compression.
There is plenty of work on binary codes for descriptors like SIFT or GIST @cite_33 @cite_19 @cite_31 @cite_22 @cite_32 @cite_29 @cite_27 @cite_47 @cite_4 @cite_18 @cite_0 . There is comparatively little work on hashing descriptors like Fisher Vectors (FV), which are two orders of magnitude higher in dimensionality. The authors of @cite_7 propose ternary quantization of FV, quantizing each dimension to +1, -1, or 0. They also explore Locality Sensitive Hashing @cite_38 and Spectral Hashing @cite_32 . Spectral Hashing performs poorly at high rates, while LSH and simple ternary quantization need thousands of bits to achieve good performance. The popular Iterative Quantization (ITQ) scheme is proposed and applied to GIST in @cite_33 . In subsequent work, @cite_5 focus on generating very long codes for global descriptors, and the Bilinear Projection-based Binary Codes (BPBC) scheme requires tens of thousands of bits to match the performance of the uncompressed global descriptor. Product Quantization (PQ) is proposed for obtaining compact representations in @cite_23 . While this produces compact descriptors, the resulting representation is not binary and cannot be compared with Hamming distances. As opposed to previous work, our focus is on generating extremely compact binary representations for FV and DCNN features in the 64-1024 bit range.
{ "cite_N": [ "@cite_38", "@cite_47", "@cite_18", "@cite_4", "@cite_33", "@cite_22", "@cite_7", "@cite_29", "@cite_32", "@cite_0", "@cite_19", "@cite_27", "@cite_23", "@cite_5", "@cite_31" ], "mid": [ "2156106197", "2221852422", "", "2154956324", "2084363474", "1468978781", "2071027807", "2171790913", "", "2150782236", "2044195942", "1992371516", "1984309565", "2162064258", "" ], "abstract": [ "We consider the problem of establishing visual correspondences in a distributed and rate-efficient fashion by broadcasting compact descriptors. Establishing visual correspondences is a critical task before other vision tasks can be performed in a wireless camera network. We propose the use of coarsely quantized random projections of descriptors to build binary hashes, and use the Hamming distance between binary hashes as the matching criterion. In this work, we derive the analytic relationship of Hamming distance between the binary hashes to Euclidean distance between the original descriptors. We present experimental verification of our result, and show that for the task of finding visual correspondences, sending binary hashes is more rate-efficient than prior approaches.", "We propose a method for learning similarity-preserving hash functions that map high-dimensional data onto binary codes. The formulation is based on structured prediction with latent variables and a hinge-like loss function. It is efficient to train for large datasets, scales well to large code lengths, and outperforms state-of-the-art methods.", "", "The Internet contains billions of images, freely available online. Methods for efficiently searching this incredibly rich resource are vital for a large number of applications. These include object recognition, computer graphics, personal photo collections, online image search tools. 
In this paper, our goal is to develop efficient image search and scene matching techniques that are not only fast, but also require very little memory, enabling their use on standard hardware or even on handheld devices. Our approach uses recently developed machine learning techniques to convert the Gist descriptor (a real valued vector that describes orientation energies at different scales and orientations within an image) to a compact binary code, with a few hundred bits per image. Using our scheme, it is possible to perform real-time searches with millions of images from the Internet using a single large PC and obtain recognition results comparable to the full descriptor. Using our codes on high quality labeled images from the LabelMe database gives surprisingly powerful recognition results using simple nearest neighbor techniques.
The basic idea is to formulate the projections so as to approximately preserve a given similarity function of interest. Having done so, one can then search the data efficiently using hash tables, or by exploring the Hamming ball volume around a novel query. Both enable sub-linear time retrieval with respect to the database size. Further, depending on the design of the projections, in some cases it is possible to bound the number of database examples that must be searched in order to achieve a given level of accuracy.", "The problem of large-scale image search has been traditionally addressed with the bag-of-visual-words (BOV). In this article, we propose to use as an alternative the Fisher kernel framework. We first show why the Fisher representation is well-suited to the retrieval problem: it describes an image by what makes it different from other images. One drawback of the Fisher vector is that it is high-dimensional and, as opposed to the BOV, it is dense. The resulting memory and computational costs do not make Fisher vectors directly amenable to large-scale retrieval. Therefore, we compress Fisher vectors to reduce their memory footprint and speed-up the retrieval. We compare three binarization approaches: a simple approach devised for this representation and two standard compression techniques. We show on two publicly available datasets that compressed Fisher vectors perform very well using as little as a few hundreds of bits per image, and significantly better than a very recent compressed BOV approach.", "Fast retrieval methods are critical for large-scale and data-driven vision applications. Recent work has explored ways to embed high-dimensional features or complex distance functions into a low-dimensional Hamming space where items can be efficiently searched. However, existing methods do not apply for high-dimensional kernelized data when the underlying feature embedding for the kernel is unknown. 
We show how to generalize locality-sensitive hashing to accommodate arbitrary kernel functions, making it possible to preserve the algorithm's sub-linear time similarity search guarantees for a wide class of useful similarity functions. Since a number of successful image-based kernels have unknown or incomputable embeddings, this is especially valuable for image retrieval tasks. We validate our technique on several large-scale datasets, and show that it enables accurate and fast performance for example-based object classification, feature matching, and content-based retrieval.", "", "Establishing visual correspondences is an essential component of many computer vision problems, and is often done with robust, local feature-descriptors. Transmission and storage of these descriptors are of critical importance in the context of mobile distributed camera networks and large indexing problems. We propose a framework for computing low bit-rate feature descriptors with a 20× reduction in bit rate. The framework is low complexity and has significant speed-up in the matching stage. We represent gradient histograms as tree structures which can be efficiently compressed. We show how to efficiently compute distances between descriptors in their compressed representation eliminating the need for decoding. We perform a comprehensive performance comparison with SIFT, SURF, and other low bit-rate descriptors and show that our proposed CHoG descriptor outperforms existing schemes.", "Large scale image search has recently attracted considerable attention due to easy availability of huge amounts of data. Several hashing methods have been proposed to allow approximate but highly efficient search. Unsupervised hashing methods show good performance with metric distances but, in image search, semantic similarity is usually given in terms of labeled pairs of images. 
There exist supervised hashing methods that can handle such semantic similarity but they are prone to overfitting when labeled data is small or noisy. Moreover, these methods are usually very slow to train. In this work, we propose a semi-supervised hashing method that is formulated as minimizing empirical error on the labeled data while maximizing variance and independence of hash bits over the labeled and unlabeled data. The proposed method can handle both metric as well as semantic similarity. The experimental results on two large datasets (up to one million samples) demonstrate its superior performance over state-of-the-art supervised and unsupervised methods.", "Recent years have witnessed the growing popularity of hashing in large-scale vision problems. It has been shown that the hashing quality could be boosted by leveraging supervised information into hash function learning. However, the existing supervised methods either lack adequate performance or often incur cumbersome model training. In this paper, we propose a novel kernel-based supervised hashing model which requires a limited amount of supervised information, i.e., similar and dissimilar data pairs, and a feasible training cost in achieving high quality hashing. The idea is to map the data to compact binary codes whose Hamming distances are minimized on similar pairs and simultaneously maximized on dissimilar pairs. Our approach is distinct from prior works by utilizing the equivalence between optimizing the code inner products and the Hamming distances. This enables us to sequentially and efficiently train the hash functions one bit at a time, yielding very short yet discriminative codes. 
We carry out extensive experiments on two image benchmarks with up to one million samples, demonstrating that our approach significantly outperforms the state-of-the-arts in searching both metric distance neighbors and semantically similar neighbors, with accuracy gains ranging from 13 to 46 .", "This paper addresses the problem of large-scale image search. Three constraints have to be taken into account: search accuracy, efficiency, and memory usage. We first present and evaluate different ways of aggregating local image descriptors into a vector and show that the Fisher kernel achieves better performance than the reference bag-of-visual words approach for any given vector dimension. We then jointly optimize dimensionality reduction and indexing in order to obtain a precise vector comparison as well as a compact representation. The evaluation shows that the image representation can be reduced to a few dozen bytes while preserving high accuracy. Searching a 100 million image data set takes about 250 ms on one processor core.", "Recent advances in visual recognition indicate that to achieve good retrieval and classification accuracy on large-scale datasets like Image Net, extremely high-dimensional visual descriptors, e.g., Fisher Vectors, are needed. We present a novel method for converting such descriptors to compact similarity-preserving binary codes that exploits their natural matrix structure to reduce their dimensionality using compact bilinear projections instead of a single large projection matrix. This method achieves comparable retrieval and classification accuracy to the original descriptors and to the state-of-the-art Product Quantization approach while having orders of magnitude faster code generation time and smaller memory footprint.", "" ] }
1501.04277
2950657369
In this paper, we study the robust subspace clustering problem, which aims to cluster possibly noisy data points into their underlying subspaces. A large pool of previous subspace clustering methods focus on graph construction via different regularizations of the representation coefficients. We instead focus on the robustness of the model to non-Gaussian noise. We propose a new robust clustering method using the correntropy induced metric, which is robust to non-Gaussian and impulsive noise. We further extend the method to handle data with outlier rows (features). The multiplicative form of half-quadratic optimization is used to optimize the non-convex correntropy objective functions of the proposed models. Extensive experiments on face datasets demonstrate that the proposed methods are more robust to corruptions and occlusions.
Many subspace clustering methods have been proposed @cite_10 @cite_14 @cite_5 @cite_13 . In this work, we focus on recent graph-based subspace clustering methods @cite_17 @cite_16 @cite_14 @cite_8 @cite_19 . These methods are based on spectral clustering: the first step constructs an affinity (or graph) matrix that is close to block diagonal, with zero elements corresponding to data pairs from different subspaces. After the affinity matrix is learned, Normalized Cut @cite_12 is employed to segment the data into multiple clusters. For a given data matrix @math , where @math denotes the feature dimension and @math is the number of data points, the most recent methods, including the L1-graph @cite_17 or Sparse Subspace Clustering (SSC) @cite_16 , Low-Rank Representation (LRR) @cite_14 @cite_8 , Multi-Subspace Representation (MSR) @cite_15 , and Least Squares Representation (LSR) @cite_19 , learn the affinity matrix @math by solving the following common problem
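The common problem itself is not reproduced in this excerpt; across the cited methods it is a self-expressive model that reconstructs the data matrix from its own columns under a method-specific loss and regularizer. Below is a minimal sketch, assuming the least-squares instance (Frobenius norm for both terms, which admits a closed form) followed by a small normalized-cut style segmentation; the regularization weight `lam` and the tiny k-means are assumptions of this sketch, not details from any of the cited papers.

```python
import numpy as np

def lsr_affinity(X, lam=0.1):
    # Least-squares instance of the common self-expression model:
    #   min_Z ||X - X Z||_F^2 + lam * ||Z||_F^2,
    # whose closed form is Z = (X^T X + lam I)^{-1} X^T X.
    # X is d x n (columns are data points); lam is an assumed weight.
    n = X.shape[1]
    G = X.T @ X
    Z = np.linalg.solve(G + lam * np.eye(n), G)
    # Symmetrized magnitudes serve as the affinity matrix.
    return (np.abs(Z) + np.abs(Z).T) / 2

def spectral_segment(W, k):
    # Minimal normalized-cut style segmentation: embed points with the
    # k bottom eigenvectors of the normalized Laplacian, row-normalize,
    # then run a tiny farthest-point-initialized k-means.
    d = W.sum(axis=1)
    Ds = np.diag(1.0 / np.sqrt(d + 1e-12))
    L = np.eye(len(d)) - Ds @ W @ Ds
    _, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
    U = vecs[:, :k]
    U = U / (np.linalg.norm(U, axis=1, keepdims=True) + 1e-12)
    centers = [U[0]]
    for _ in range(1, k):                # farthest-point initialization
        dist = np.min([((U - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(U[int(np.argmax(dist))])
    C = np.array(centers)
    for _ in range(20):                  # fixed-iteration k-means
        lab = np.argmin(((U[:, None] - C[None]) ** 2).sum(-1), axis=1)
        C = np.array([U[lab == j].mean(0) if np.any(lab == j) else C[j]
                      for j in range(k)])
    return lab
```

For data drawn from independent subspaces, the learned affinity is (near) block diagonal, which is exactly the property the passage above describes as the goal of the graph-construction step.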
{ "cite_N": [ "@cite_14", "@cite_8", "@cite_10", "@cite_19", "@cite_5", "@cite_15", "@cite_16", "@cite_13", "@cite_12", "@cite_17" ], "mid": [ "79405465", "1997201895", "2052311585", "", "2081195900", "200673309", "2003217181", "2136540140", "2121947440", "" ], "abstract": [ "We propose low-rank representation (LRR) to segment data drawn from a union of multiple linear (or affine) subspaces. Given a set of data vectors, LRR seeks the lowest-rank representation among all the candidates that represent all vectors as the linear combination of the bases in a dictionary. Unlike the well-known sparse representation (SR), which computes the sparsest representation of each data vector individually, LRR aims at finding the lowest-rank representation of a collection of vectors jointly. LRR better captures the global structure of data, giving a more effective tool for robust subspace segmentation from corrupted data. Both theoretical and experimental results show that LRR is a promising tool for subspace segmentation.", "In this paper, we address the subspace clustering problem. Given a set of data samples (vectors) approximately drawn from a union of multiple subspaces, our goal is to cluster the samples into their respective subspaces and remove possible outliers as well. To this end, we propose a novel objective function named Low-Rank Representation (LRR), which seeks the lowest rank representation among all the candidates that can represent the data samples as linear combinations of the bases in a given dictionary. 
It is shown that the convex program associated with LRR solves the subspace clustering problem in the following sense: When the data is clean, we prove that LRR exactly recovers the true subspace structures; when the data are contaminated by outliers, we prove that under certain conditions LRR can exactly recover the row space of the original data and detect the outlier as well; for data corrupted by arbitrary sparse errors, LRR can also approximately recover the row space with theoretical guarantees. Since the subspace membership is provably determined by the row space, these further imply that LRR can perform robust subspace clustering and error correction in an efficient and effective way.", "Over the past few years, several methods for segmenting a scene containing multiple rigidly moving objects have been proposed. However, most existing methods have been tested on a handful of sequences only, and each method has been often tested on a different set of sequences. Therefore, the comparison of different methods has been fairly limited. In this paper, we compare four 3D motion segmentation algorithms for affine cameras on a benchmark of 155 motion sequences of checkerboard, traffic, and articulated scenes.", "", "The key task in graph-oriented learning is constructing an informative graph to model the geometrical and discriminant structure of a data manifold. Since traditional graph construction methods are sensitive to noise and less datum-adaptive to changes in density, a new graph construction method so-called l1-Graph has been proposed [1] recently. A graph construction method needs to have two important properties: sparsity and locality. However, the l1-Graph is strong in sparsity property, but weak in locality. In order to overcome such limitation, we propose a new method of constructing an informative graph using automatic group sparse regularization based on the work of l1-Graph, which is called as group sparse graph (GroupSp-Graph). 
The newly developed GroupSp-Graph has the same noise-insensitive property as l1-Graph, and also can successively preserve the group and local information in the graph. In other words, the proposed group sparse graph has both properties of sparsity and locality simultaneously. Furthermore, we integrate the proposed graph with several graph-oriented learning algorithms: spectral embedding, spectral clustering, subspace learning and manifold regularized non-negative matrix factorization. The empirical studies on benchmark data sets show that the proposed algorithms achieve considerable improvement over classic graph constructing methods and the l1-Graph method in various learning tasks.", "This paper presents the multi-subspace discovery problem and provides a theoretical solution which is guaranteed to recover the number of subspaces, the dimensions of each subspace, and the members of data points of each subspace simultaneously. We further propose a data representation model to handle noisy real world data. We develop a novel optimization approach to learn the presented model which is guaranteed to converge to global optimizers. As applications of our models, we first apply our solutions as preprocessing in a series of machine learning problems, including clustering, classification, and semisupervised learning. We found that our method automatically obtains robust data presentation which preserves the affine subspace structures of high dimensional data and generate more accurate results in the learning tasks. We also establish a robust standalone classifier which directly utilizes our sparse and low rank representation model. 
Experimental results indicate our methods improve the quality of data by preprocessing and the standalone classifier outperforms some state-of-the-art learning approaches.", "We propose a method based on sparse representation (SR) to cluster data drawn from multiple low-dimensional linear or affine subspaces embedded in a high-dimensional space. Our method is based on the fact that each point in a union of subspaces has a SR with respect to a dictionary formed by all other data points. In general, finding such a SR is NP hard. Our key contribution is to show that, under mild assumptions, the SR can be obtained exactly by using l1 optimization. The segmentation of the data is obtained by applying spectral clustering to a similarity matrix built from this SR. Our method can handle noise, outliers as well as missing data. We apply our subspace clustering algorithm to the problem of segmenting multiple motions in video. Experiments on 167 video sequences show that our approach significantly outperforms state-of-the-art methods.", "Sparse subspace learning has drawn more and more attentions recently. However, most of the sparse subspace learning methods are unsupervised and unsuitable for classification tasks. In this paper, a new sparse subspace learning algorithm called discriminant sparse neighborhood preserving embedding (DSNPE) is proposed by adding the discriminant information into sparse neighborhood preserving embedding (SNPE). DSNPE not only preserves the sparse reconstructive relationship of SNPE, but also sufficiently utilizes the global discriminant structures from the following two aspects: (1) maximum margin criterion (MMC) is added into the objective function of DSNPE; (2) only the training samples with the same label as the current sample are used to compute the sparse reconstructive relationship. 
Extensive experiments on three face image datasets (Yale, Extended Yale B and AR) demonstrate the effectiveness of the proposed DSNPE method.", "We propose a novel approach for solving the perceptual grouping problem in vision. Rather than focusing on local features and their consistencies in the image data, our approach aims at extracting the global impression of an image. We treat image segmentation as a graph partitioning problem and propose a novel global criterion, the normalized cut, for segmenting the graph. The normalized cut criterion measures both the total dissimilarity between the different groups as well as the total similarity within the groups. We show that an efficient computational technique based on a generalized eigenvalue problem can be used to optimize this criterion. We applied this approach to segmenting static images, as well as motion sequences, and found the results to be very encouraging.", "" ] }
1501.04277
2950657369
In this paper, we study the robust subspace clustering problem, which aims to cluster possibly noisy data points into their underlying subspaces. A large pool of previous subspace clustering methods focus on graph construction via different regularizations of the representation coefficients. We instead focus on the robustness of the model to non-Gaussian noise. We propose a new robust clustering method using the correntropy induced metric, which is robust to non-Gaussian and impulsive noise. We further extend the method to handle data with outlier rows (features). The multiplicative form of half-quadratic optimization is used to optimize the non-convex correntropy objective functions of the proposed models. Extensive experiments on face datasets demonstrate that the proposed methods are more robust to corruptions and occlusions.
The above methods share a common formulation. The Frobenius norm and the L21 norm are used as loss functions, while the L1 norm, the nuclear norm, and the Frobenius norm are used to regularize the affinity matrix. Different formulations require different solvers. In this work, we show that the L1 norm, L21 norm, and nuclear norm all satisfy certain conditions, and thus the previous subspace clustering methods, including SSC, LRR, and MSR, can be unified within a general framework from the perspective of half-quadratic optimization @cite_11 . The relationship between this general framework and previous optimization methods for sparse and low-rank minimization is also presented in this work.
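As an illustration of the multiplicative half-quadratic mechanism this framework relies on, the sketch below applies it to a correntropy-style (Welsch) loss in a simple robust regression setting: each iteration fixes per-sample Gaussian weights and solves a weighted least-squares subproblem. The loss, `sigma`, and the iteration count are assumptions of this sketch, not details taken from the paper or from @cite_11.

```python
import numpy as np

def correntropy_hq_regression(A, b, sigma=1.0, iters=30):
    # Multiplicative half-quadratic sketch for a correntropy-style
    # (Welsch) objective: min_x sum_i (1 - exp(-r_i^2 / (2 sigma^2))),
    # with residuals r = A x - b. Each iteration fixes the Gaussian
    # weights w_i = exp(-r_i^2 / (2 sigma^2)) and solves the resulting
    # weighted least-squares problem, so the non-convex objective is
    # minimized through a sequence of convex subproblems.
    x = np.linalg.lstsq(A, b, rcond=None)[0]    # ordinary LS start
    for _ in range(iters):
        r = A @ x - b
        w = np.exp(-r ** 2 / (2 * sigma ** 2))  # outliers get tiny weight
        Aw = A * w[:, None]                     # rows of A scaled by w
        x = np.linalg.solve(A.T @ Aw, Aw.T @ b)
    return x
```

On a line-fitting problem with a few gross outliers, the Gaussian weights drive the outliers' influence toward zero, which is exactly the robustness to non-Gaussian, impulsive noise that the correntropy objective is designed to provide.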
{ "cite_N": [ "@cite_11" ], "mid": [ "2030507150" ], "abstract": [ "We address the minimization of regularized convex cost functions which are customarily used for edge-preserving restoration and reconstruction of signals and images. In order to accelerate computation, the multiplicative and the additive half-quadratic reformulation of the original cost-function have been pioneered in Geman and Reynolds [IEEE Trans. Pattern Anal. Machine Intelligence, 14 (1992), pp. 367--383] and Geman and Yang IEEE Trans. Image Process., 4 (1995), pp. 932--946]. The alternate minimization of the resultant (augmented) cost-functions has a simple explicit form. The goal of this paper is to provide a systematic analysis of the convergence rate achieved by these methods. For the multiplicative and additive half-quadratic regularizations, we determine their upper bounds for their root-convergence factors. The bound for the multiplicative form is seen to be always smaller than the bound for the additive form. Experiments show that the number of iterations required for convergence for the multiplicative form is always less than that for the additive form. However, the computational cost of each iteration is much higher for the multiplicative form than for the additive form. The global assessment is that minimization using the additive form of half-quadratic regularization is faster than using the multiplicative form. When the additive form is applicable, it is hence recommended. Extensive experiments demonstrate that in our MATLAB implementation, both methods are substantially faster (in terms of computational times) than the standard MATLAB Optimization Toolbox routines used in our comparison study." ] }
1501.04301
2952586546
We present WiGest: a system that leverages changes in WiFi signal strength to sense in-air hand gestures around the user's mobile device. Compared to related work, WiGest is unique in using standard WiFi equipment, with no modifications, and no training for gesture recognition. The system identifies different signal change primitives, from which we construct mutually independent gesture families. These families can be mapped to distinguishable application actions. We address various challenges including cleaning the noisy signals, gesture type and attributes detection, reducing false positives due to interfering humans, and adapting to changing signal polarity. We implement a proof-of-concept prototype using off-the-shelf laptops and extensively evaluate the system in both an office environment and a typical apartment with standard WiFi access points. Our results show that WiGest detects the basic primitives with an accuracy of 87.5 using a single AP only, including through-the-wall non-line-of-sight scenarios. This accuracy increases to 96 using three overheard APs. In addition, when evaluating the system using a multi-media player application, we achieve a classification accuracy of 96 . This accuracy is robust to the presence of other interfering humans, highlighting WiGest's ability to enable future ubiquitous hands-free gesture-based interaction with mobile devices.
Gesture recognition systems generally adopt techniques such as computer vision @cite_19 , inertial sensors @cite_35 , ultrasonic sensing @cite_5 , and infrared electromagnetic radiation (e.g., on the Samsung S4). While promising, these techniques suffer from limitations such as sensitivity to lighting, high installation and instrumentation overhead, the need for dedicated sensors to be worn or installed, or the requirement of line-of-sight communication between the user and the sensor. These limitations motivated exploiting WiFi, already installed on most user devices and abundant within the infrastructure, for gesture detection and recognition, as detailed in this section.
{ "cite_N": [ "@cite_19", "@cite_5", "@cite_35" ], "mid": [ "2060280062", "2153200718", "2100147865" ], "abstract": [ "We propose a new method to quickly and accurately predict human pose---the 3D positions of body joints---from a single depth image, without depending on information from preceding frames. Our approach is strongly rooted in current object recognition strategies. By designing an intermediate representation in terms of body parts, the difficult pose estimation problem is transformed into a simpler per-pixel classification problem, for which efficient machine learning techniques exist. By using computer graphics to synthesize a very large dataset of training image pairs, one can train a classifier that estimates body part labels from test images invariant to pose, body shape, clothing, and other irrelevances. Finally, we generate confidence-scored 3D proposals of several body joints by reprojecting the classification result and finding local modes. The system runs in under 5ms on the Xbox 360. Our evaluation shows high accuracy on both synthetic and real test sets, and investigates the effect of several training parameters. We achieve state-of-the-art accuracy in our comparison with related work and demonstrate improved generalization over exact whole-skeleton nearest neighbor matching.", "Gesture is becoming an increasingly popular means of interacting with computers. However, it is still relatively costly to deploy robust gesture recognition sensors in existing mobile platforms. We present SoundWave, a technique that leverages the speaker and microphone already embedded in most commodity devices to sense in-air gestures around the device. To do this, we generate an inaudible tone, which gets frequency-shifted when it reflects off moving objects like the hand. We measure this shift with the microphone to infer various gestures. 
In this note, we describe the phenomena and detection algorithm, demonstrate a variety of gestures, and present an informal evaluation on the robustness of this approach across different devices and people.", "Computer vision and inertial measurement have made it possible for people to interact with computers using whole-body gestures. Although there has been rapid growth in the uses and applications of these systems, their ubiquity has been limited by the high cost of heavily instrumenting either the environment or the user. In this paper, we use the human body as an antenna for sensing whole-body gestures. Such an approach requires no instrumentation to the environment, and only minimal instrumentation to the user, and thus enables truly mobile applications. We show robust gesture recognition with an average accuracy of 93 across 12 whole-body gestures, and promising results for robust location classification within a building. In addition, we demonstrate a real-time interactive system which allows a user to interact with a computer using whole-body gestures" ] }
1501.04301
2952586546
We present WiGest: a system that leverages changes in WiFi signal strength to sense in-air hand gestures around the user's mobile device. Compared to related work, WiGest is unique in using standard WiFi equipment, with no modifications, and no training for gesture recognition. The system identifies different signal change primitives, from which we construct mutually independent gesture families. These families can be mapped to distinguishable application actions. We address various challenges including cleaning the noisy signals, gesture type and attributes detection, reducing false positives due to interfering humans, and adapting to changing signal polarity. We implement a proof-of-concept prototype using off-the-shelf laptops and extensively evaluate the system in both an office environment and a typical apartment with standard WiFi access points. Our results show that WiGest detects the basic primitives with an accuracy of 87.5 using a single AP only, including through-the-wall non-line-of-sight scenarios. This accuracy increases to 96 using three overheard APs. In addition, when evaluating the system using a multi-media player application, we achieve a classification accuracy of 96 . This accuracy is robust to the presence of other interfering humans, highlighting WiGest's ability to enable future ubiquitous hands-free gesture-based interaction with mobile devices.
Work in this area represents the most recent efforts to detect fine-grained motion and human gestures by leveraging RF signals. WiVi @cite_40 uses an inverse synthetic aperture radar (ISAR) technique, treating the motion of a human body as an antenna array and tracking the resulting RF beam, thus enabling radar-like vision and simple through-wall gesture-based communication. Finally, WiSee @cite_17 is a fine-grained gesture recognition system that builds on DopLink @cite_28 and @cite_26 by exploiting the Doppler shift in narrow bands extracted from wide-band OFDM transmissions to recognize nine different human gestures.
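The Doppler relation these systems exploit can be sketched numerically. The 5 GHz carrier and the 0.5 m/s hand speed below are illustrative assumptions, not values taken from the passage:

```python
def doppler_shift_hz(speed_mps, carrier_hz=5e9, c_mps=3e8):
    # A signal reflected off a body moving at speed_mps sees its path
    # length change at twice that speed, so the reflected carrier is
    # shifted by f_d = 2 * v * f / c. The 5 GHz default is an assumed
    # WiFi band; c_mps is the speed of light.
    return 2.0 * speed_mps * carrier_hz / c_mps

# A hand moving at roughly 0.5 m/s shifts a 5 GHz carrier by only ~17 Hz.
```

A shift of a few tens of hertz is a tiny fraction of a multi-MHz WiFi channel, which is why extracting narrow frequency bands from the wide-band OFDM transmission is the key signal-processing step in this line of work.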
{ "cite_N": [ "@cite_28", "@cite_40", "@cite_26", "@cite_17" ], "mid": [ "2131425694", "2129045955", "", "2141336889" ], "abstract": [ "Mobile and embedded electronics are pervasive in today's environment. As such, it is necessary to have a natural and intuitive way for users to indicate the intent to connect to these devices from a distance. We present DopLink, an ultrasonic-based device selection approach. It utilizes the already embedded audio hardware in smart devices to determine if a particular device is being pointed at by another device (i.e., the user waves their mobile phone at a target in a pointing motion). We evaluate the accuracy of DopLink in a controlled user study, showing that, within 3 meters, it has an average accuracy of 95 for device selection and 97 for finding relative device position. Finally, we show three applications of DopLink: rapid device pairing, home automation, and multi-display synchronization.", "Wi-Fi signals are typically information carriers between a transmitter and a receiver. In this paper, we show that Wi-Fi can also extend our senses, enabling us to see moving objects through walls and behind closed doors. In particular, we can use such signals to identify the number of people in a closed room and their relative locations. We can also identify simple gestures made behind a wall, and combine a sequence of gestures to communicate messages to a wireless receiver without carrying any transmitting device. The paper introduces two main innovations. First, it shows how one can use MIMO interference nulling to eliminate reflections off static objects and focus the receiver on a moving target. Second, it shows how one can track a human by treating the motion of a human body as an antenna array and tracking the resulting RF beam. 
We demonstrate the validity of our design by building it into USRP software radios and testing it in office buildings.", "", "This paper presents WiSee, a novel gesture recognition system that leverages wireless signals (e.g., Wi-Fi) to enable whole-home sensing and recognition of human gestures. Since wireless signals do not require line-of-sight and can traverse through walls, WiSee can enable whole-home gesture recognition using few wireless sources. Further, it achieves this goal without requiring instrumentation of the human body with sensing devices. We implement a proof-of-concept prototype of WiSee using USRP-N210s and evaluate it in both an office environment and a two- bedroom apartment. Our results show that WiSee can identify and classify a set of nine gestures with an average accuracy of 94 ." ] }